## langchain-ai/langchain issue #7472 (closed): Pinecone: Support starter tier

### Feature request
Adapt the Pinecone vectorstore to support the upcoming starter tier. The changes are related to removing namespaces and the `delete by metadata` feature.
### Motivation
Indexes in the upcoming Pinecone V4 won't support:
* namespaces
* `configure_index()`
* delete by metadata
* `describe_index()` with metadata filtering
* `metadata_config` parameter to `create_index()`
* `delete()` with the `deleteAll` parameter
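Cleanup code that loops over namespaces (as the integration tests below do) therefore has to branch on what the index actually reports. A minimal sketch, assuming the classic client surface (`describe_index_stats()` / `delete(...)`); the `FakeIndex` stub and `clear_index` helper are purely illustrative, not part of the proposed change:

```python
class FakeIndex:
    """Illustrative stand-in for a pinecone Index (not the real client)."""

    def __init__(self, namespaces):
        self._stats = {"namespaces": namespaces}
        self.deleted = []

    def describe_index_stats(self):
        return self._stats

    def delete(self, delete_all=False, namespace=None):
        self.deleted.append((delete_all, namespace))


def clear_index(index) -> None:
    # Branch on whether the index reports namespaces, so the same cleanup
    # can run against namespaced tiers and (assumed) starter-tier indexes.
    namespaces = index.describe_index_stats().get("namespaces") or {}
    if namespaces:
        for name in namespaces:
            index.delete(delete_all=True, namespace=name)
    else:
        # Starter tier: delete with deleteAll is unsupported, so a real
        # implementation would have to delete explicit vector ids instead.
        raise NotImplementedError("starter tier: delete by explicit vector ids")


idx = FakeIndex({"ns-1": {}, "ns-2": {}})
clear_index(idx)
```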
### Your contribution
I'll do it.

- Issue: https://github.com/langchain-ai/langchain/issues/7472
- Pull request: https://github.com/langchain-ai/langchain/pull/7473
- Before-fix SHA: 5debd5043e61d29efea661c20818b48a0f39e5a6
- After-fix SHA: 9d13dcd17c2dfab8f087bcc37e99f1181dfe5c63
- Reported: 2023-07-10T10:19:16Z
- Fix committed: 2023-07-10T15:39:47Z
- Language: python
- Updated file: tests/integration_tests/vectorstores/test_pinecone.py

index_stats = cls.index.describe_index_stats()
for _namespace_name in index_stats["namespaces"].keys():
cls.index.delete(delete_all=True, namespace=_namespace_name)
reset_pinecone()
@pytest.fixture(autouse=True)
def setup(self) -> None:
index_stats = self.index.describe_index_stats()
for _namespace_name in index_stats["namespaces"].keys():
self.index.delete(delete_all=True, namespace=_namespace_name)
reset_pinecone()
@pytest.mark.vcr()
def test_from_texts(
self, texts: List[str], embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and search."""
unique_id = uuid.uuid4().hex
needs = f"foobuu {unique_id} booo"
texts.insert(0, needs)
docsearch = Pinecone.from_texts(
texts=texts,
embedding=embedding_openai,
index_name=index_name,
namespace=namespace_name,
)
output = docsearch.similarity_search(unique_id, k=1, namespace=namespace_name)
assert output == [Document(page_content=needs)]
@pytest.mark.vcr()
def test_from_texts_with_metadatas(
self, texts: List[str], embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and search."""
unique_id = uuid.uuid4().hex
needs = f"foobuu {unique_id} booo"
texts.insert(0, needs)
metadatas = [{"page": i} for i in range(len(texts))]
docsearch = Pinecone.from_texts(
texts,
embedding_openai,
index_name=index_name,
metadatas=metadatas,
namespace=namespace_name,
)
output = docsearch.similarity_search(needs, k=1, namespace=namespace_name)
assert output == [Document(page_content=needs, metadata={"page": 0.0})]
@pytest.mark.vcr()
def test_from_texts_with_scores(self, embedding_openai: OpenAIEmbeddings) -> None:
"""Test end to end construction and search with scores and IDs."""
texts = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts))]
docsearch = Pinecone.from_texts(
texts,
embedding_openai,
index_name=index_name,
metadatas=metadatas,
namespace=namespace_name,
)
output = docsearch.similarity_search_with_score(
"foo", k=3, namespace=namespace_name
)
docs = [o[0] for o in output]
scores = [o[1] for o in output]
sorted_documents = sorted(docs, key=lambda x: x.metadata["page"])
assert sorted_documents == [
Document(page_content="foo", metadata={"page": 0.0}),
Document(page_content="bar", metadata={"page": 1.0}),
Document(page_content="baz", metadata={"page": 2.0}),
]
assert scores[0] > scores[1] > scores[2]
def test_from_existing_index_with_namespaces(
self, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test that namespaces are properly handled."""
texts_1 = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts_1))]
Pinecone.from_texts(
texts_1,
embedding_openai,
index_name=index_name,
metadatas=metadatas,
namespace=f"{index_name}-1",
)
texts_2 = ["foo2", "bar2", "baz2"]
metadatas = [{"page": i} for i in range(len(texts_2))]
Pinecone.from_texts(
texts_2,
embedding_openai,
index_name=index_name,
metadatas=metadatas,
namespace=f"{index_name}-2",
)
docsearch = Pinecone.from_existing_index(
index_name=index_name,
embedding=embedding_openai,
namespace=f"{index_name}-1",
)
output = docsearch.similarity_search("foo", k=20, namespace=f"{index_name}-1")
page_contents = sorted(set([o.page_content for o in output]))
assert all(content in ["foo", "bar", "baz"] for content in page_contents)
assert all(content not in ["foo2", "bar2", "baz2"] for content in page_contents)
def test_add_documents_with_ids(
self, texts: List[str], embedding_openai: OpenAIEmbeddings
) -> None:
ids = [uuid.uuid4().hex for _ in range(len(texts))]
Pinecone.from_texts(
texts=texts,
ids=ids,
embedding=embedding_openai,
index_name=index_name,
namespace=index_name,
)
index_stats = self.index.describe_index_stats()
assert index_stats["namespaces"][index_name]["vector_count"] == len(texts)
ids_1 = [uuid.uuid4().hex for _ in range(len(texts))]
Pinecone.from_texts(
texts=texts,
ids=ids_1,
embedding=embedding_openai,
index_name=index_name,
namespace=index_name,
)
index_stats = self.index.describe_index_stats()
assert index_stats["namespaces"][index_name]["vector_count"] == len(texts) * 2

## langchain-ai/langchain issue #7569 (closed): Document loader for Notion DB doesn't support some properties

### Issue you'd like to raise.
The current version of the document loader for Notion DB doesn't support the following properties for metadata:
- `unique_id`
- https://www.notion.so/help/unique-id
- `status`
- https://www.notion.so/help/guides/status-property-gives-clarity-on-tasks
- `people`
- useful property when you assign some task to assignees
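The extra branches such a fix would need can be sketched as a standalone helper. The payload shapes below are assumptions based on Notion's public API documentation, not code from this repo:

```python
from typing import Any, Dict


def extract_new_property(prop_data: Dict[str, Any]) -> Any:
    """Sketch of value extraction for the three unsupported property types."""
    prop_type = prop_data["type"]
    if prop_type == "status":
        # Assumed shape: {"type": "status", "status": {"name": "In progress", ...}}
        return prop_data["status"]["name"] if prop_data["status"] else None
    if prop_type == "unique_id":
        # Assumed shape: {"type": "unique_id", "unique_id": {"prefix": "TASK", "number": 42}}
        uid = prop_data["unique_id"]
        if uid is None:
            return None
        prefix = uid.get("prefix") or ""
        return f"{prefix}-{uid['number']}" if prefix else uid["number"]
    if prop_type == "people":
        # Assumed shape: {"type": "people", "people": [{"name": "Ada", ...}, ...]}
        return [person.get("name") for person in prop_data["people"]]
    return None  # fall through to the loader's existing branches
```

Plugged into `load_page`, these branches would slot next to the existing `rich_text`/`title`/`multi_select`/`url` cases.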
### Suggestion:
I would like to make a PR to fix this issue if it's okay.

- Issue: https://github.com/langchain-ai/langchain/issues/7569
- Pull request: https://github.com/langchain-ai/langchain/pull/7570
- Before-fix SHA: 5f17c57174c88e8c00bd71216dcf44b14fee7aaf
- After-fix SHA: 3f7213586e5fc5222fe6b6c889aa50776cd1c988
- Reported: 2023-07-12T00:02:03Z
- Fix committed: 2023-07-12T07:34:54Z
- Language: python
- Updated file: langchain/document_loaders/notiondb.py

"""Notion DB loader for langchain"""
from typing import Any, Dict, List, Optional
import requests
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
NOTION_BASE_URL = "https://api.notion.com/v1"
DATABASE_URL = NOTION_BASE_URL + "/databases/{database_id}/query"
PAGE_URL = NOTION_BASE_URL + "/pages/{page_id}"
BLOCK_URL = NOTION_BASE_URL + "/blocks/{block_id}/children"
class NotionDBLoader(BaseLoader):
"""Notion DB Loader.
Reads content from pages within a Notion Database.
Args:
integration_token (str): Notion integration token.
database_id (str): Notion database id.
request_timeout_sec (int): Timeout for Notion requests in seconds.
Defaults to 10.
"""
def __init__(
self,
integration_token: str,
database_id: str,
request_timeout_sec: Optional[int] = 10,
) -> None:
"""Initialize with parameters."""
if not integration_token:
raise ValueError("integration_token must be provided")
if not database_id:
raise ValueError("database_id must be provided")
self.token = integration_token
self.database_id = database_id
self.headers = {
"Authorization": "Bearer " + self.token,
"Content-Type": "application/json",
"Notion-Version": "2022-06-28",
}
self.request_timeout_sec = request_timeout_sec
def load(self) -> List[Document]:
"""Load documents from the Notion database.
Returns:
List[Document]: List of documents.
"""
page_summaries = self._retrieve_page_summaries()
return list(self.load_page(page_summary) for page_summary in page_summaries)
def _retrieve_page_summaries(
self, query_dict: Dict[str, Any] = {"page_size": 100}
) -> List[Dict[str, Any]]:
"""Get all the pages from a Notion database."""
pages: List[Dict[str, Any]] = []
while True:
data = self._request(
DATABASE_URL.format(database_id=self.database_id),
method="POST",
query_dict=query_dict,
)
pages.extend(data.get("results"))
if not data.get("has_more"):
break
query_dict["start_cursor"] = data.get("next_cursor")
return pages
def load_page(self, page_summary: Dict[str, Any]) -> Document:
"""Read a page.
Args:
page_summary: Page summary from Notion API.
"""
page_id = page_summary["id"]
metadata: Dict[str, Any] = {}
for prop_name, prop_data in page_summary["properties"].items():
prop_type = prop_data["type"]
if prop_type == "rich_text":
value = (
prop_data["rich_text"][0]["plain_text"]
if prop_data["rich_text"]
else None
)
elif prop_type == "title":
value = (
prop_data["title"][0]["plain_text"] if prop_data["title"] else None
)
elif prop_type == "multi_select":
value = (
[item["name"] for item in prop_data["multi_select"]]
if prop_data["multi_select"]
else []
)
elif prop_type == "url":
value = prop_data["url"]
else:
value = None
metadata[prop_name.lower()] = value
metadata["id"] = page_id
return Document(page_content=self._load_blocks(page_id), metadata=metadata)
def _load_blocks(self, block_id: str, num_tabs: int = 0) -> str:
"""Read a block and its children."""
result_lines_arr: List[str] = []
cur_block_id: str = block_id
while cur_block_id:
data = self._request(BLOCK_URL.format(block_id=cur_block_id))
for result in data["results"]:
result_obj = result[result["type"]]
if "rich_text" not in result_obj:
continue
cur_result_text_arr: List[str] = []
for rich_text in result_obj["rich_text"]:
if "text" in rich_text:
cur_result_text_arr.append(
"\t" * num_tabs + rich_text["text"]["content"]
)
if result["has_children"]:
children_text = self._load_blocks(
result["id"], num_tabs=num_tabs + 1
)
cur_result_text_arr.append(children_text)
result_lines_arr.append("\n".join(cur_result_text_arr))
cur_block_id = data.get("next_cursor")
return "\n".join(result_lines_arr)
def _request(
self, url: str, method: str = "GET", query_dict: Dict[str, Any] = {}
) -> Any:
res = requests.request(
method,
url,
headers=self.headers,
json=query_dict,
timeout=self.request_timeout_sec,
)
res.raise_for_status()
return res.json()

## langchain-ai/langchain issue #7571 (closed): AmazonKendraRetriever "Could not load credentials" error in latest release

### System Info
LangChain version: 0.0.229
Platform: AWS Lambda execution
Python version: 3.9
I get the following error when creating the AmazonKendraRetriever using LangChain version 0.0.229.
Code to create retriever:
`retriever = AmazonKendraRetriever(index_id=kendra_index)`
Error:
```[ERROR] ValidationError: 1 validation error for AmazonKendraRetriever
__root__
Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 171, in lambda_handler
retriever = AmazonKendraRetriever(index_id=kendra_index)
File "/opt/python/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__```
When using LangChain version 0.0.219 this error does not occur.
Issue also raised on aws-samples git repo with potential solution: https://github.com/aws-samples/amazon-kendra-langchain-extensions/issues/24
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install latest version of Langchain
2. Follow instructions here: https://python.langchain.com/docs/modules/data_connection/retrievers/integrations/amazon_kendra_retriever
### Expected behavior
Error not thrown when creating AmazonKendraRetriever.

- Issue: https://github.com/langchain-ai/langchain/issues/7571
- Pull request: https://github.com/langchain-ai/langchain/pull/7629
- Before-fix SHA: 0e1d7a27c62b15fba6bcafc5f8ac996d57e0b1d3
- After-fix SHA: f11d845dee355709b41dec36dcc7c74f7b90c7ec
- Reported: 2023-07-12T00:16:40Z
- Fix committed: 2023-07-13T03:47:35Z
- Language: python
- Updated file: langchain/retrievers/kendra.py

import re
from typing import Any, Dict, List, Literal, Optional
from pydantic import BaseModel, Extra, root_validator
from langchain.callbacks.manager import (
AsyncCallbackManagerForRetrieverRun,
CallbackManagerForRetrieverRun,
)
from langchain.docstore.document import Document
from langchain.schema import BaseRetriever
def clean_excerpt(excerpt: str) -> str:
"""Cleans an excerpt from Kendra.
Args:
excerpt: The excerpt to clean.
Returns:
The cleaned excerpt.
"""
if not excerpt:
return excerpt
res = re.sub("\s+", " ", excerpt).replace("...", "")
return res
def combined_text(title: str, excerpt: str) -> str:
"""Combines a title and an excerpt into a single string.
Args:
title: The title of the document.
excerpt: The excerpt of the document.
Returns:
The combined text.
"""
if not title or not excerpt:
return ""
return f"Document Title: {title} \nDocument Excerpt: \n{excerpt}\n"
class Highlight(BaseModel, extra=Extra.allow):
BeginOffset: int
EndOffset: int
TopAnswer: Optional[bool]
Type: Optional[str]
class TextWithHighLights(BaseModel, extra=Extra.allow):
Text: str
Highlights: Optional[Any]
class AdditionalResultAttributeValue(BaseModel, extra=Extra.allow):
TextWithHighlightsValue: TextWithHighLights
class AdditionalResultAttribute(BaseModel, extra=Extra.allow):
Key: str
ValueType: Literal["TEXT_WITH_HIGHLIGHTS_VALUE"]
Value: AdditionalResultAttributeValue
def get_value_text(self) -> str:
return self.Value.TextWithHighlightsValue.Text
class QueryResultItem(BaseModel, extra=Extra.allow):
DocumentId: str
DocumentTitle: TextWithHighLights
DocumentURI: Optional[str]
FeedbackToken: Optional[str]
Format: Optional[str]
Id: Optional[str]
Type: Optional[str]
AdditionalAttributes: Optional[List[AdditionalResultAttribute]] = []
DocumentExcerpt: Optional[TextWithHighLights]
def get_attribute_value(self) -> str:
if not self.AdditionalAttributes:
return ""
if not self.AdditionalAttributes[0]:
return ""
else:
return self.AdditionalAttributes[0].get_value_text()
def get_excerpt(self) -> str:
if (
self.AdditionalAttributes
and self.AdditionalAttributes[0].Key == "AnswerText"
):
excerpt = self.get_attribute_value()
elif self.DocumentExcerpt:
excerpt = self.DocumentExcerpt.Text
else:
excerpt = ""
return clean_excerpt(excerpt)
def to_doc(self) -> Document:
title = self.DocumentTitle.Text
source = self.DocumentURI
excerpt = self.get_excerpt()
type = self.Type
page_content = combined_text(title, excerpt)
metadata = {"source": source, "title": title, "excerpt": excerpt, "type": type}
return Document(page_content=page_content, metadata=metadata)
class QueryResult(BaseModel, extra=Extra.allow):
ResultItems: List[QueryResultItem]
def get_top_k_docs(self, top_n: int) -> List[Document]:
items_len = len(self.ResultItems)
count = items_len if items_len < top_n else top_n
docs = [self.ResultItems[i].to_doc() for i in range(0, count)]
return docs
class DocumentAttributeValue(BaseModel, extra=Extra.allow):
DateValue: Optional[str]
LongValue: Optional[int]
StringListValue: Optional[List[str]]
StringValue: Optional[str]
class DocumentAttribute(BaseModel, extra=Extra.allow):
Key: str
Value: DocumentAttributeValue
class RetrieveResultItem(BaseModel, extra=Extra.allow):
Content: Optional[str]
DocumentAttributes: Optional[List[DocumentAttribute]] = []
DocumentId: Optional[str]
DocumentTitle: Optional[str]
DocumentURI: Optional[str]
Id: Optional[str]
def get_excerpt(self) -> str:
if not self.Content:
return ""
return clean_excerpt(self.Content)
def to_doc(self) -> Document:
title = self.DocumentTitle if self.DocumentTitle else ""
source = self.DocumentURI
excerpt = self.get_excerpt()
page_content = combined_text(title, excerpt)
metadata = {"source": source, "title": title, "excerpt": excerpt}
return Document(page_content=page_content, metadata=metadata)
class RetrieveResult(BaseModel, extra=Extra.allow):
QueryId: str
ResultItems: List[RetrieveResultItem]
def get_top_k_docs(self, top_n: int) -> List[Document]:
items_len = len(self.ResultItems)
count = items_len if items_len < top_n else top_n
docs = [self.ResultItems[i].to_doc() for i in range(0, count)]
return docs
class AmazonKendraRetriever(BaseRetriever): |
    """Retriever class to query documents from Amazon Kendra Index.
Args:
index_id: Kendra index id
region_name: The aws region e.g., `us-west-2`.
Falls back to the AWS_DEFAULT_REGION env variable
or region specified in ~/.aws/config.
credentials_profile_name: The name of the profile in the ~/.aws/credentials
or ~/.aws/config files, which has either access keys or role information
specified. If not specified, the default credential profile or, if on an
EC2 instance, credentials from IMDS will be used.
top_k: No of results to return
attribute_filter: Additional filtering of results based on metadata
See: https://docs.aws.amazon.com/kendra/latest/APIReference
client: boto3 client for Kendra
Example:
.. code-block:: python
retriever = AmazonKendraRetriever(
index_id="c0806df7-e76b-4bce-9b5c-d5582f6b1a03"
)
"""
index_id: str
region_name: Optional[str] = None
credentials_profile_name: Optional[str] = None
top_k: int = 3
attribute_filter: Optional[Dict] = None
client: Any
@root_validator(pre=True)
def create_client(cls, values: Dict[str, Any]) -> Dict[str, Any]: |
        if values.get("client") is not None:
return values
try:
import boto3
if values["credentials_profile_name"] is not None:
session = boto3.Session(profile_name=values["credentials_profile_name"])
else:
session = boto3.Session()
client_params = {}
if values["region_name"] is not None:
client_params["region_name"] = values["region_name"]
values["client"] = session.client("kendra", **client_params)
return values
except ImportError:
raise ModuleNotFoundError(
"Could not import boto3 python package. "
"Please install it with `pip install boto3`."
)
except Exception as e:
raise ValueError(
"Could not load credentials to authenticate with AWS client. "
"Please check that credentials in the specified "
"profile name are valid."
) from e
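A plausible reading of the reported Lambda failure (not verified against the linked fix): in a `pre=True` root validator, `values` only contains the fields the caller actually supplied, so `values["credentials_profile_name"]` raises `KeyError` when the argument is omitted, and the broad `except Exception` above rewrites it into the misleading credentials error. A self-contained sketch of that mechanism, with `.get()` as the repair:

```python
def buggy(values: dict) -> dict:
    """Mimics the validator above: subscripting an optional field."""
    try:
        if values["credentials_profile_name"] is not None:  # KeyError if absent
            pass
        return values
    except Exception as e:
        raise ValueError("Could not load credentials ...") from e

def fixed(values: dict) -> dict:
    """Same check, but .get() tolerates the missing key."""
    if values.get("credentials_profile_name") is not None:
        pass
    return values

try:
    buggy({"index_id": "abc"})
except ValueError as err:
    print(type(err.__cause__).__name__)  # the KeyError hides behind the ValueError
print(fixed({"index_id": "abc"}))
```

Note that the same validator already uses `values.get("client")` on its first line, so the subscripted fields are the odd ones out.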
def _kendra_query( |
        self,
query: str,
top_k: int,
attribute_filter: Optional[Dict] = None,
) -> List[Document]:
if attribute_filter is not None:
response = self.client.retrieve(
IndexId=self.index_id,
QueryText=query.strip(),
PageSize=top_k,
AttributeFilter=attribute_filter,
)
else:
response = self.client.retrieve(
IndexId=self.index_id, QueryText=query.strip(), PageSize=top_k
)
r_result = RetrieveResult.parse_obj(response)
result_len = len(r_result.ResultItems)
if result_len == 0:
if attribute_filter is not None:
response = self.client.query(
IndexId=self.index_id,
QueryText=query.strip(),
PageSize=top_k,
AttributeFilter=attribute_filter,
) |
            else:
response = self.client.query(
IndexId=self.index_id, QueryText=query.strip(), PageSize=top_k
)
q_result = QueryResult.parse_obj(response)
docs = q_result.get_top_k_docs(top_k)
else:
docs = r_result.get_top_k_docs(top_k)
return docs
def _get_relevant_documents(
self,
query: str,
*,
run_manager: CallbackManagerForRetrieverRun,
) -> List[Document]:
"""Run search on Kendra index and get top k documents
Example:
.. code-block:: python
docs = retriever.get_relevant_documents('This is my query')
"""
docs = self._kendra_query(query, self.top_k, self.attribute_filter)
return docs
async def _aget_relevant_documents(
self,
query: str,
*,
run_manager: AsyncCallbackManagerForRetrieverRun,
) -> List[Document]:
raise NotImplementedError("Async version is not implemented for Kendra yet.") |
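The `attribute_filter` pass-through shown in `_kendra_query` accepts the JSON shape of the Kendra `AttributeFilter` API. One illustrative filter (field names follow the AWS Kendra API reference linked in the class docstring, not this codebase):

```python
# Restrict retrieval to English documents via Kendra's reserved
# _language_code attribute (shape per the AWS Kendra AttributeFilter API).
attribute_filter = {
    "EqualsTo": {
        "Key": "_language_code",
        "Value": {"StringValue": "en"},
    }
}
# retriever = AmazonKendraRetriever(index_id="<index-id>",
#                                   attribute_filter=attribute_filter)
```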
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,370 | Sliding window of intermediate actions for agents | ### Feature request
Allow tweaking the history window / intermediate actions that are sent to the LLM:
* Send a sliding window of the N last actions
* Only send a specific snapshot (can be useful for code generation tasks - for example where the agent needs to perfect the code until it works).
### Motivation
Currently, agents use the entire length of intermediate actions whenever they call the LLM.
This means that long-running agents can quickly reach the token limit.
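The requested windowing can be sketched in plain Python (the function name and tuple shape are illustrative, not LangChain API):

```python
from typing import List, Tuple

def sliding_window(
    steps: List[Tuple[str, str]], last_n: int
) -> List[Tuple[str, str]]:
    """Keep only the N most recent (action, observation) pairs."""
    return steps if last_n <= 0 else steps[-last_n:]

history = [(f"action_{i}", f"obs_{i}") for i in range(10)]
print(sliding_window(history, 3))  # only the last three pairs reach the LLM
```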
### Your contribution
I'm willing to write a PR for this if the feature makes sense for the community | https://github.com/langchain-ai/langchain/issues/6370 | https://github.com/langchain-ai/langchain/pull/6476 | 92ef77da3523f051cf17a854b2e5c2c767bbf64f | a8bbfb2da3f8c28869b12c8a9bb21209b0d03089 | "2023-06-18T15:56:26Z" | python | "2023-07-13T06:09:25Z" | langchain/agents/agent.py | """Chain that takes in an input and produces an action and action input."""
from __future__ import annotations
import asyncio
import json
import logging
import time
from abc import abstractmethod
from pathlib import Path
from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Union |
import yaml
from pydantic import BaseModel, root_validator
from langchain.agents.agent_types import AgentType
from langchain.agents.tools import InvalidTool
from langchain.callbacks.base import BaseCallbackManager
from langchain.callbacks.manager import (
AsyncCallbackManagerForChainRun,
AsyncCallbackManagerForToolRun,
CallbackManagerForChainRun,
CallbackManagerForToolRun,
Callbacks,
)
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.input import get_color_mapping
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import (
AgentAction,
AgentFinish,
BaseOutputParser,
BasePromptTemplate,
OutputParserException,
)
from langchain.schema.language_model import BaseLanguageModel
from langchain.schema.messages import BaseMessage
from langchain.tools.base import BaseTool
from langchain.utilities.asyncio import asyncio_timeout
logger = logging.getLogger(__name__)
class BaseSingleActionAgent(BaseModel): |
    """Base Agent class."""
@property
def return_values(self) -> List[str]:
"""Return values of the agent."""
return ["output"]
def get_allowed_tools(self) -> Optional[List[str]]:
return None
@abstractmethod
def plan(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
callbacks: Callbacks = None,
**kwargs: Any,
) -> Union[AgentAction, AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
callbacks: Callbacks to run.
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
@abstractmethod
async def aplan( |
        self,
intermediate_steps: List[Tuple[AgentAction, str]],
callbacks: Callbacks = None,
**kwargs: Any,
) -> Union[AgentAction, AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
callbacks: Callbacks to run.
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
@property
@abstractmethod
def input_keys(self) -> List[str]: |
        """Return the input keys.
:meta private:
"""
def return_stopped_response(
self,
early_stopping_method: str,
intermediate_steps: List[Tuple[AgentAction, str]],
**kwargs: Any,
) -> AgentFinish:
"""Return response when agent has been stopped due to max iterations."""
if early_stopping_method == "force":
return AgentFinish(
{"output": "Agent stopped due to iteration limit or time limit."}, ""
)
else:
raise ValueError(
f"Got unsupported early_stopping_method `{early_stopping_method}`"
)
@classmethod
def from_llm_and_tools( |
        cls,
llm: BaseLanguageModel,
tools: Sequence[BaseTool],
callback_manager: Optional[BaseCallbackManager] = None,
**kwargs: Any,
) -> BaseSingleActionAgent:
raise NotImplementedError
@property
def _agent_type(self) -> str:
"""Return Identifier of agent type."""
raise NotImplementedError
def dict(self, **kwargs: Any) -> Dict:
"""Return dictionary representation of agent."""
_dict = super().dict()
_type = self._agent_type
if isinstance(_type, AgentType):
_dict["_type"] = str(_type.value)
else:
_dict["_type"] = _type
return _dict
def save(self, file_path: Union[Path, str]) -> None: |
        """Save the agent.
Args:
file_path: Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
"""
if isinstance(file_path, str):
save_path = Path(file_path)
else:
save_path = file_path
directory_path = save_path.parent
directory_path.mkdir(parents=True, exist_ok=True)
agent_dict = self.dict()
if save_path.suffix == ".json":
with open(file_path, "w") as f:
json.dump(agent_dict, f, indent=4)
elif save_path.suffix == ".yaml":
with open(file_path, "w") as f:
yaml.dump(agent_dict, f, default_flow_style=False)
else:
raise ValueError(f"{save_path} must be json or yaml")
def tool_run_logging_kwargs(self) -> Dict:
return {}
class BaseMultiActionAgent(BaseModel): |
    """Base Agent class."""
@property
def return_values(self) -> List[str]:
"""Return values of the agent."""
return ["output"]
def get_allowed_tools(self) -> Optional[List[str]]:
return None
@abstractmethod
def plan(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
callbacks: Callbacks = None,
**kwargs: Any,
) -> Union[List[AgentAction], AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
callbacks: Callbacks to run.
**kwargs: User inputs.
Returns:
Actions specifying what tool to use.
"""
@abstractmethod
async def aplan( |
        self,
intermediate_steps: List[Tuple[AgentAction, str]],
callbacks: Callbacks = None,
**kwargs: Any,
) -> Union[List[AgentAction], AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
callbacks: Callbacks to run.
**kwargs: User inputs.
Returns:
Actions specifying what tool to use.
"""
@property
@abstractmethod
def input_keys(self) -> List[str]: |
        """Return the input keys.
:meta private:
"""
def return_stopped_response(
self,
early_stopping_method: str,
intermediate_steps: List[Tuple[AgentAction, str]],
**kwargs: Any,
) -> AgentFinish:
"""Return response when agent has been stopped due to max iterations."""
if early_stopping_method == "force":
return AgentFinish({"output": "Agent stopped due to max iterations."}, "")
else:
raise ValueError(
f"Got unsupported early_stopping_method `{early_stopping_method}`"
)
@property
def _agent_type(self) -> str:
"""Return Identifier of agent type."""
raise NotImplementedError
def dict(self, **kwargs: Any) -> Dict:
"""Return dictionary representation of agent."""
_dict = super().dict()
_dict["_type"] = str(self._agent_type)
return _dict
def save(self, file_path: Union[Path, str]) -> None: |
        """Save the agent.
Args:
file_path: Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
"""
if isinstance(file_path, str):
save_path = Path(file_path)
else:
save_path = file_path
directory_path = save_path.parent
directory_path.mkdir(parents=True, exist_ok=True)
agent_dict = self.dict()
if save_path.suffix == ".json":
with open(file_path, "w") as f:
json.dump(agent_dict, f, indent=4)
elif save_path.suffix == ".yaml":
with open(file_path, "w") as f:
yaml.dump(agent_dict, f, default_flow_style=False)
else:
raise ValueError(f"{save_path} must be json or yaml")
def tool_run_logging_kwargs(self) -> Dict:
return {}
class AgentOutputParser(BaseOutputParser): |
    @abstractmethod
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
"""Parse text into agent action/finish."""
class LLMSingleActionAgent(BaseSingleActionAgent):
llm_chain: LLMChain
output_parser: AgentOutputParser
stop: List[str]
@property
def input_keys(self) -> List[str]:
return list(set(self.llm_chain.input_keys) - {"intermediate_steps"})
def dict(self, **kwargs: Any) -> Dict:
"""Return dictionary representation of agent."""
_dict = super().dict()
del _dict["output_parser"]
return _dict
def plan(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
callbacks: Callbacks = None,
**kwargs: Any,
) -> Union[AgentAction, AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
callbacks: Callbacks to run.
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
output = self.llm_chain.run(
intermediate_steps=intermediate_steps,
stop=self.stop,
callbacks=callbacks,
**kwargs,
)
return self.output_parser.parse(output)
async def aplan(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
callbacks: Callbacks = None,
**kwargs: Any,
) -> Union[AgentAction, AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
callbacks: Callbacks to run.
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
output = await self.llm_chain.arun(
intermediate_steps=intermediate_steps,
stop=self.stop,
callbacks=callbacks,
**kwargs,
)
return self.output_parser.parse(output)
def tool_run_logging_kwargs(self) -> Dict:
return {
"llm_prefix": "",
"observation_prefix": "" if len(self.stop) == 0 else self.stop[0],
}
class Agent(BaseSingleActionAgent):
"""Class responsible for calling the language model and deciding the action.
This is driven by an LLMChain. The prompt in the LLMChain MUST include
a variable called "agent_scratchpad" where the agent can put its
intermediary work.
"""
llm_chain: LLMChain
output_parser: AgentOutputParser
allowed_tools: Optional[List[str]] = None
def dict(self, **kwargs: Any) -> Dict:
"""Return dictionary representation of agent."""
_dict = super().dict()
del _dict["output_parser"]
return _dict
def get_allowed_tools(self) -> Optional[List[str]]:
return self.allowed_tools
@property
def return_values(self) -> List[str]:
return ["output"]
def _fix_text(self, text: str) -> str:
"""Fix the text."""
raise ValueError("fix_text not implemented for this agent.")
@property
def _stop(self) -> List[str]:
return [
f"\n{self.observation_prefix.rstrip()}",
f"\n\t{self.observation_prefix.rstrip()}",
]
def _construct_scratchpad(
self, intermediate_steps: List[Tuple[AgentAction, str]]
) -> Union[str, List[BaseMessage]]:
"""Construct the scratchpad that lets the agent continue its thought process."""
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += f"\n{self.observation_prefix}{observation}\n{self.llm_prefix}"
return thoughts
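`_construct_scratchpad` just concatenates each action's log with its observation, framed by the agent's prefixes. A standalone sketch over plain tuples — the default prefixes here are illustrative, not pulled from any particular agent:

```python
from typing import List, Tuple


def construct_scratchpad(
    steps: List[Tuple[str, str]],
    observation_prefix: str = "Observation: ",
    llm_prefix: str = "Thought: ",
) -> str:
    # Mirrors the loop above: the action's log, then the observation
    # bracketed by the observation and LLM prefixes.
    thoughts = ""
    for log, observation in steps:
        thoughts += log
        thoughts += f"\n{observation_prefix}{observation}\n{llm_prefix}"
    return thoughts


print(construct_scratchpad([("I should search.", "result A")]))
```

The trailing `llm_prefix` is what cues the model to continue its reasoning after the last observation.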
def plan(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
callbacks: Callbacks = None,
**kwargs: Any,
) -> Union[AgentAction, AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
callbacks: Callbacks to run.
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
return self.output_parser.parse(full_output)
async def aplan(
self,
intermediate_steps: List[Tuple[AgentAction, str]],
callbacks: Callbacks = None,
**kwargs: Any,
) -> Union[AgentAction, AgentFinish]:
"""Given input, decided what to do.
Args:
intermediate_steps: Steps the LLM has taken to date,
along with observations
callbacks: Callbacks to run.
**kwargs: User inputs.
Returns:
Action specifying what tool to use.
"""
full_inputs = self.get_full_inputs(intermediate_steps, **kwargs)
full_output = await self.llm_chain.apredict(callbacks=callbacks, **full_inputs)
return self.output_parser.parse(full_output)
def get_full_inputs(
self, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any
) -> Dict[str, Any]:
"""Create the full inputs for the LLMChain from intermediate steps."""
thoughts = self._construct_scratchpad(intermediate_steps)
new_inputs = {"agent_scratchpad": thoughts, "stop": self._stop}
full_inputs = {**kwargs, **new_inputs}
return full_inputs
@property
def input_keys(self) -> List[str]:
"""Return the input keys.
:meta private:
"""
return list(set(self.llm_chain.input_keys) - {"agent_scratchpad"})
@root_validator()
def validate_prompt(cls, values: Dict) -> Dict:
"""Validate that prompt matches format."""
prompt = values["llm_chain"].prompt
if "agent_scratchpad" not in prompt.input_variables:
logger.warning(
"`agent_scratchpad` should be a variable in prompt.input_variables."
" Did not find it, so adding it at the end."
)
prompt.input_variables.append("agent_scratchpad")
if isinstance(prompt, PromptTemplate):
prompt.template += "\n{agent_scratchpad}"
elif isinstance(prompt, FewShotPromptTemplate):
prompt.suffix += "\n{agent_scratchpad}"
else:
raise ValueError(f"Got unexpected prompt type {type(prompt)}")
return values
@property
@abstractmethod
def observation_prefix(self) -> str:
"""Prefix to append the observation with."""
@property
@abstractmethod
def llm_prefix(self) -> str:
"""Prefix to append the LLM call with."""
@classmethod
@abstractmethod
def create_prompt(cls, tools: Sequence[BaseTool]) -> BasePromptTemplate:
"""Create a prompt for this class."""
@classmethod
def _validate_tools(cls, tools: Sequence[BaseTool]) -> None:
"""Validate that appropriate tools are passed in."""
pass
@classmethod
@abstractmethod
def _get_default_output_parser(cls, **kwargs: Any) -> AgentOutputParser:
"""Get default output parser for this class."""
@classmethod
def from_llm_and_tools(
cls,
llm: BaseLanguageModel,
tools: Sequence[BaseTool],
callback_manager: Optional[BaseCallbackManager] = None,
output_parser: Optional[AgentOutputParser] = None,
**kwargs: Any,
) -> Agent:
"""Construct an agent from an LLM and tools."""
cls._validate_tools(tools)
llm_chain = LLMChain(
llm=llm,
prompt=cls.create_prompt(tools),
callback_manager=callback_manager,
)
tool_names = [tool.name for tool in tools]
_output_parser = output_parser or cls._get_default_output_parser()
return cls(
llm_chain=llm_chain,
allowed_tools=tool_names,
output_parser=_output_parser,
**kwargs,
)
def return_stopped_response(
self,
early_stopping_method: str,
intermediate_steps: List[Tuple[AgentAction, str]],
**kwargs: Any,
) -> AgentFinish:
"""Return response when agent has been stopped due to max iterations."""
if early_stopping_method == "force":
return AgentFinish(
{"output": "Agent stopped due to iteration limit or time limit."}, ""
)
elif early_stopping_method == "generate":
thoughts = ""
for action, observation in intermediate_steps:
thoughts += action.log
thoughts += (
f"\n{self.observation_prefix}{observation}\n{self.llm_prefix}"
)
thoughts += (
"\n\nI now need to return a final answer based on the previous steps:"
)
new_inputs = {"agent_scratchpad": thoughts, "stop": self._stop}
full_inputs = {**kwargs, **new_inputs}
full_output = self.llm_chain.predict(**full_inputs)
parsed_output = self.output_parser.parse(full_output)
if isinstance(parsed_output, AgentFinish):
return parsed_output
else:
return AgentFinish({"output": full_output}, full_output)
else:
raise ValueError(
"early_stopping_method should be one of `force` or `generate`, "
f"got {early_stopping_method}"
)
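The two early-stopping modes can be summarized without any LLM in the loop: `force` returns a canned message, anything unrecognized is rejected. This sketch deliberately omits the `generate` branch, since that one needs a model call:

```python
def stopped_response(early_stopping_method: str) -> str:
    # "force": give up with a fixed message, mirroring the branch above.
    if early_stopping_method == "force":
        return "Agent stopped due to iteration limit or time limit."
    raise ValueError(
        "early_stopping_method should be one of `force` or `generate`, "
        f"got {early_stopping_method}"
    )


print(stopped_response("force"))  # Agent stopped due to iteration limit or time limit.
```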
def tool_run_logging_kwargs(self) -> Dict:
return {
"llm_prefix": self.llm_prefix,
"observation_prefix": self.observation_prefix,
}
class ExceptionTool(BaseTool):
name = "_Exception"
description = "Exception tool"
def _run(
self,
query: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
return query
async def _arun(
self,
query: str,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
) -> str:
return query
class AgentExecutor(Chain):
"""Consists of an agent using tools."""
agent: Union[BaseSingleActionAgent, BaseMultiActionAgent]
"""The agent to run for creating a plan and determining actions
to take at each step of the execution loop."""
tools: Sequence[BaseTool]
"""The valid tools the agent can call."""
return_intermediate_steps: bool = False
"""Whether to return the agent's trajectory of intermediate steps
at the end in addition to the final output."""
max_iterations: Optional[int] = 15
"""The maximum number of steps to take before ending the execution |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,370 | Sliding window of intermediate actions for agents | ### Feature request
Allow tweaking with the history window / intermediate actions that are being sent to the LLM:
* Send a sliding window if N last actions
* Only send a specific snapshot (can be useful for code generation tasks - for example where the agent needs to perfect the code until it works).
### Motivation
Currently, agents use the entire length of intermediate actions whenever they call the LLM.
This means that long-running agents can quickly reach the token limit.
### Your contribution
I'm willing to write a PR for this if the feature makes sense for the community | https://github.com/langchain-ai/langchain/issues/6370 | https://github.com/langchain-ai/langchain/pull/6476 | 92ef77da3523f051cf17a854b2e5c2c767bbf64f | a8bbfb2da3f8c28869b12c8a9bb21209b0d03089 | "2023-06-18T15:56:26Z" | python | "2023-07-13T06:09:25Z" | langchain/agents/agent.py | loop.
Setting to 'None' could lead to an infinite loop."""
max_execution_time: Optional[float] = None
"""The maximum amount of wall clock time to spend in the execution
loop.
"""
early_stopping_method: str = "force"
"""The method to use for early stopping if the agent never
returns `AgentFinish`. Either 'force' or 'generate'.
`"force"` returns a string saying that it stopped because it met a
time or iteration limit.
`"generate"` calls the agent's LLM Chain one final time to generate
a final answer based on the previous steps.
"""
handle_parsing_errors: Union[
bool, str, Callable[[OutputParserException], str]
] = False
"""How to handle errors raised by the agent's output parser.
Defaults to `False`, which raises the error.
s
If `true`, the error will be sent back to the LLM as an observation.
If a string, the string itself will be sent to the LLM as an observation.
If a callable function, the function will be called with the exception
as an argument, and the result of that function will be passed to the agent
as an observation.
"""
@classmethod
def from_agent_and_tools(
cls,
agent: Union[BaseSingleActionAgent, BaseMultiActionAgent],
tools: Sequence[BaseTool],
callback_manager: Optional[BaseCallbackManager] = None,
**kwargs: Any,
) -> AgentExecutor:
"""Create from agent and tools."""
return cls(
agent=agent, tools=tools, callback_manager=callback_manager, **kwargs
)
@root_validator()
def validate_tools(cls, values: Dict) -> Dict:
"""Validate that tools are compatible with agent."""
agent = values["agent"]
tools = values["tools"]
allowed_tools = agent.get_allowed_tools()
if allowed_tools is not None:
if set(allowed_tools) != set([tool.name for tool in tools]):
raise ValueError(
f"Allowed tools ({allowed_tools}) different than "
f"provided tools ({[tool.name for tool in tools]})"
)
return values
@root_validator()
def validate_return_direct_tool(cls, values: Dict) -> Dict:
"""Validate that tools are compatible with agent."""
agent = values["agent"]
tools = values["tools"]
if isinstance(agent, BaseMultiActionAgent):
for tool in tools:
if tool.return_direct:
raise ValueError(
"Tools that have `return_direct=True` are not allowed "
"in multi-action agents"
)
return values
def save(self, file_path: Union[Path, str]) -> None:
"""Raise error - saving not supported for Agent Executors."""
raise ValueError(
"Saving not supported for agent executors. "
"If you are trying to save the agent, please use the "
"`.save_agent(...)`"
)
def save_agent(self, file_path: Union[Path, str]) -> None:
"""Save the underlying agent."""
return self.agent.save(file_path)
@property
def input_keys(self) -> List[str]:
"""Return the input keys.
:meta private:
"""
return self.agent.input_keys
@property
def output_keys(self) -> List[str]:
"""Return the singular output key.
:meta private:
"""
if self.return_intermediate_steps:
return self.agent.return_values + ["intermediate_steps"]
else:
return self.agent.return_values
def lookup_tool(self, name: str) -> BaseTool:
"""Lookup tool by name."""
return {tool.name: tool for tool in self.tools}[name]
def _should_continue(self, iterations: int, time_elapsed: float) -> bool:
if self.max_iterations is not None and iterations >= self.max_iterations:
return False
if (
self.max_execution_time is not None
and time_elapsed >= self.max_execution_time
):
return False
return True
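`_should_continue` is a pure budget check over the two limits. A standalone version (defaults chosen to match the class attributes above) makes both cutoffs explicit:

```python
from typing import Optional


def should_continue(
    iterations: int,
    time_elapsed: float,
    max_iterations: Optional[int] = 15,
    max_execution_time: Optional[float] = None,
) -> bool:
    # Stop once either budget is exhausted; None disables a limit.
    if max_iterations is not None and iterations >= max_iterations:
        return False
    if max_execution_time is not None and time_elapsed >= max_execution_time:
        return False
    return True


print(should_continue(14, 3.2))  # True
print(should_continue(15, 3.2))  # False
```

Setting both limits to `None` reproduces the "could lead to an infinite loop" caveat from the docstring.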
def _return(
self,
output: AgentFinish,
intermediate_steps: list,
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
if run_manager:
run_manager.on_agent_finish(output, color="green", verbose=self.verbose)
final_output = output.return_values
if self.return_intermediate_steps:
final_output["intermediate_steps"] = intermediate_steps
return final_output
async def _areturn(
self,
output: AgentFinish,
intermediate_steps: list,
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
if run_manager:
await run_manager.on_agent_finish(
output, color="green", verbose=self.verbose
)
final_output = output.return_values
if self.return_intermediate_steps:
final_output["intermediate_steps"] = intermediate_steps
return final_output
def _take_next_step(
self,
name_to_tool_map: Dict[str, BaseTool],
color_mapping: Dict[str, str],
inputs: Dict[str, str],
intermediate_steps: List[Tuple[AgentAction, str]],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
"""Take a single step in the thought-action-observation loop.
Override this to take control of how the agent makes and acts on choices.
"""
try:
output = self.agent.plan(
intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
)
except OutputParserException as e:
if isinstance(self.handle_parsing_errors, bool):
raise_error = not self.handle_parsing_errors
else:
raise_error = False
if raise_error:
raise e
text = str(e)
if isinstance(self.handle_parsing_errors, bool):
if e.send_to_llm:
observation = str(e.observation)
text = str(e.llm_output)
else:
observation = "Invalid or incomplete response"
elif isinstance(self.handle_parsing_errors, str):
observation = self.handle_parsing_errors
elif callable(self.handle_parsing_errors):
observation = self.handle_parsing_errors(e)
else:
raise ValueError("Got unexpected type of `handle_parsing_errors`")
output = AgentAction("_Exception", observation, text)
if run_manager:
run_manager.on_agent_action(output, color="green")
tool_run_kwargs = self.agent.tool_run_logging_kwargs()
observation = ExceptionTool().run(
output.tool_input,
verbose=self.verbose,
color=None,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
return [(output, observation)]
if isinstance(output, AgentFinish):
return output
actions: List[AgentAction]
if isinstance(output, AgentAction):
actions = [output]
else:
actions = output
result = []
for agent_action in actions:
if run_manager:
run_manager.on_agent_action(agent_action, color="green")
if agent_action.tool in name_to_tool_map:
tool = name_to_tool_map[agent_action.tool]
return_direct = tool.return_direct
color = color_mapping[agent_action.tool]
tool_run_kwargs = self.agent.tool_run_logging_kwargs()
if return_direct:
tool_run_kwargs["llm_prefix"] = ""
observation = tool.run(
agent_action.tool_input,
verbose=self.verbose,
color=color,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
else:
tool_run_kwargs = self.agent.tool_run_logging_kwargs()
observation = InvalidTool().run(
agent_action.tool,
verbose=self.verbose,
color=None,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
result.append((agent_action, observation))
return result
async def _atake_next_step(
self,
name_to_tool_map: Dict[str, BaseTool],
color_mapping: Dict[str, str],
inputs: Dict[str, str],
intermediate_steps: List[Tuple[AgentAction, str]],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
"""Take a single step in the thought-action-observation loop.
Override this to take control of how the agent makes and acts on choices.
"""
try:
output = await self.agent.aplan(
intermediate_steps,
callbacks=run_manager.get_child() if run_manager else None,
**inputs,
)
except OutputParserException as e:
if isinstance(self.handle_parsing_errors, bool):
raise_error = not self.handle_parsing_errors
else:
raise_error = False
if raise_error:
raise e
text = str(e)
if isinstance(self.handle_parsing_errors, bool):
if e.send_to_llm:
observation = str(e.observation)
text = str(e.llm_output)
else:
observation = "Invalid or incomplete response"
elif isinstance(self.handle_parsing_errors, str):
observation = self.handle_parsing_errors
elif callable(self.handle_parsing_errors):
observation = self.handle_parsing_errors(e)
else:
raise ValueError("Got unexpected type of `handle_parsing_errors`")
output = AgentAction("_Exception", observation, text)
tool_run_kwargs = self.agent.tool_run_logging_kwargs()
observation = await ExceptionTool().arun(
output.tool_input,
verbose=self.verbose,
color=None,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
return [(output, observation)]
if isinstance(output, AgentFinish):
return output
actions: List[AgentAction]
if isinstance(output, AgentAction):
actions = [output]
else:
actions = output
async def _aperform_agent_action(
agent_action: AgentAction,
) -> Tuple[AgentAction, str]:
if run_manager:
await run_manager.on_agent_action(
agent_action, verbose=self.verbose, color="green"
)
if agent_action.tool in name_to_tool_map:
tool = name_to_tool_map[agent_action.tool]
return_direct = tool.return_direct
color = color_mapping[agent_action.tool]
tool_run_kwargs = self.agent.tool_run_logging_kwargs()
if return_direct:
tool_run_kwargs["llm_prefix"] = ""
observation = await tool.arun(
agent_action.tool_input,
verbose=self.verbose,
color=color,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
else:
tool_run_kwargs = self.agent.tool_run_logging_kwargs()
observation = await InvalidTool().arun(
agent_action.tool,
verbose=self.verbose,
color=None,
callbacks=run_manager.get_child() if run_manager else None,
**tool_run_kwargs,
)
return agent_action, observation
result = await asyncio.gather(
*[_aperform_agent_action(agent_action) for agent_action in actions]
)
return list(result)
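The `_atake_next_step` method above fans all pending tool calls out concurrently with `asyncio.gather`. A framework-free sketch of that fan-out pattern (the `run_tool` coroutine is a stand-in for a real tool call, not part of LangChain):

```python
import asyncio
from typing import List, Tuple


async def run_tool(action: str, tool_input: str) -> Tuple[str, str]:
    # Stand-in for a real tool invocation; the sleep simulates I/O latency.
    await asyncio.sleep(0.01)
    return action, f"observation for {tool_input}"


async def take_actions(actions: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    # Run every pending tool call concurrently; gather preserves input order.
    results = await asyncio.gather(
        *[run_tool(name, tool_input) for name, tool_input in actions]
    )
    return list(results)


print(asyncio.run(take_actions([("search", "weather"), ("calculator", "2+2")])))
# → [('search', 'observation for weather'), ('calculator', 'observation for 2+2')]
```

Because `asyncio.gather` returns results in the order the coroutines were passed, the `(action, observation)` pairs line up with the original action list even when the tools finish out of order.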
def _call(
self,
inputs: Dict[str, str],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
"""Run text through and get agent response."""
name_to_tool_map = {tool.name: tool for tool in self.tools}
color_mapping = get_color_mapping(
[tool.name for tool in self.tools], excluded_colors=["green", "red"]
)
intermediate_steps: List[Tuple[AgentAction, str]] = []
iterations = 0
time_elapsed = 0.0
start_time = time.time()
while self._should_continue(iterations, time_elapsed):
next_step_output = self._take_next_step(
name_to_tool_map,
color_mapping,
inputs,
intermediate_steps,
run_manager=run_manager,
)
if isinstance(next_step_output, AgentFinish):
return self._return(
next_step_output, intermediate_steps, run_manager=run_manager
)
intermediate_steps.extend(next_step_output)
if len(next_step_output) == 1:
next_step_action = next_step_output[0]
tool_return = self._get_tool_return(next_step_action)
if tool_return is not None:
return self._return(
tool_return, intermediate_steps, run_manager=run_manager
)
iterations += 1
time_elapsed = time.time() - start_time
output = self.agent.return_stopped_response(
self.early_stopping_method, intermediate_steps, **inputs
)
return self._return(output, intermediate_steps, run_manager=run_manager)
async def _acall(
self,
inputs: Dict[str, str],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, str]:
"""Run text through and get agent response."""
name_to_tool_map = {tool.name: tool for tool in self.tools}
color_mapping = get_color_mapping(
[tool.name for tool in self.tools], excluded_colors=["green"]
)
intermediate_steps: List[Tuple[AgentAction, str]] = []
iterations = 0
time_elapsed = 0.0
start_time = time.time()
async with asyncio_timeout(self.max_execution_time):
try:
while self._should_continue(iterations, time_elapsed):
next_step_output = await self._atake_next_step(
name_to_tool_map,
color_mapping,
inputs,
intermediate_steps,
run_manager=run_manager,
)
if isinstance(next_step_output, AgentFinish):
return await self._areturn(
next_step_output,
intermediate_steps,
run_manager=run_manager,
)
intermediate_steps.extend(next_step_output)
if len(next_step_output) == 1:
next_step_action = next_step_output[0]
tool_return = self._get_tool_return(next_step_action)
if tool_return is not None:
return await self._areturn(
tool_return, intermediate_steps, run_manager=run_manager
)
iterations += 1
time_elapsed = time.time() - start_time
output = self.agent.return_stopped_response(
self.early_stopping_method, intermediate_steps, **inputs
)
return await self._areturn(
output, intermediate_steps, run_manager=run_manager
)
except TimeoutError:
output = self.agent.return_stopped_response(
self.early_stopping_method, intermediate_steps, **inputs
)
return await self._areturn(
output, intermediate_steps, run_manager=run_manager
)
def _get_tool_return(
self, next_step_output: Tuple[AgentAction, str]
) -> Optional[AgentFinish]:
"""Check if the tool is a returning tool."""
agent_action, observation = next_step_output
name_to_tool_map = {tool.name: tool for tool in self.tools}
if agent_action.tool in name_to_tool_map:
if name_to_tool_map[agent_action.tool].return_direct:
return AgentFinish(
{self.agent.return_values[0]: observation},
"",
)
return None |
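Issue #6370 above asks for a sliding window over `intermediate_steps`; the `_call`/`_acall` loops shown here always pass the full history to `agent.plan`. A minimal sketch of the proposed trimming, using a hypothetical `trim_intermediate_steps` helper that is not part of the code above:

```python
from typing import List, Tuple


def trim_intermediate_steps(
    steps: List[Tuple[str, str]], last_n: int
) -> List[Tuple[str, str]]:
    # Keep only the last N (action, observation) pairs so a long-running
    # agent does not blow past the model's context window.
    if last_n < 0:
        return steps  # negative means "no trimming", mirroring an opt-out
    # steps[-0:] would return the whole list, so handle 0 explicitly.
    return steps[-last_n:] if last_n else []


history = [(f"action {i}", f"obs {i}") for i in range(10)]
print(trim_intermediate_steps(history, 3))  # only the last three pairs
```

The executor would call such a helper on `intermediate_steps` just before each `agent.plan`/`agent.aplan` invocation, leaving the full history intact for the final return value.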
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,768 | Can't use memory for an internal LLMChain inside a SequentialChain | ### System Info
Langchain 0.0.214
Python 3.11.1
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a `SequentialChain` that contains 2 `LLMChain`s, and add a memory to the first one.
2. When running, you'll get a validation error:
```
Missing required input keys: {'chat_history'}, only had {'human_input'} (type=value_error)
```
### Expected behavior
You should be able to add memory to one chain, not just the Sequential Chain | https://github.com/langchain-ai/langchain/issues/6768 | https://github.com/langchain-ai/langchain/pull/6769 | 488d2d5da95a2bacdca3d1623d862ac5ab28d59e | f307ca094b0d175d71ac424eba3d9f7ef5fc44f1 | "2023-06-26T16:09:11Z" | python | "2023-07-13T06:47:44Z" | langchain/chains/sequential.py
"""Chain pipeline where the outputs of one step feed directly into next."""
from typing import Any, Dict, List, Optional
from pydantic import Extra, root_validator
from langchain.callbacks.manager import (
AsyncCallbackManagerForChainRun,
CallbackManagerForChainRun,
)
from langchain.chains.base import Chain
from langchain.input import get_color_mapping
class SequentialChain(Chain):
"""Chain where the outputs of one chain feed directly into next."""
chains: List[Chain]
input_variables: List[str]
output_variables: List[str]
return_all: bool = False
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def input_keys(self) -> List[str]:
"""Return expected input keys to the chain.
:meta private:
"""
return self.input_variables
@property
def output_keys(self) -> List[str]:
"""Return output key.
:meta private:
"""
return self.output_variables
@root_validator(pre=True)
def validate_chains(cls, values: Dict) -> Dict:
"""Validate that the correct inputs exist for all chains."""
chains = values["chains"]
input_variables = values["input_variables"]
memory_keys = list()
if "memory" in values and values["memory"] is not None:
"""Validate that prompt input variables are consistent."""
memory_keys = values["memory"].memory_variables
if set(input_variables).intersection(set(memory_keys)):
overlapping_keys = set(input_variables) & set(memory_keys)
raise ValueError(
f"The input key(s) {', '.join(overlapping_keys)} are found "
f"in the Memory keys ({memory_keys}) - please use input and "
f"memory keys that don't overlap."
)
known_variables = set(input_variables + memory_keys)
for chain in chains:
missing_vars = set(chain.input_keys).difference(known_variables)
if missing_vars:
raise ValueError(
f"Missing required input keys: {missing_vars}, "
f"only had {known_variables}"
)
overlapping_keys = known_variables.intersection(chain.output_keys)
if overlapping_keys:
raise ValueError(
f"Chain returned keys that already exist: {overlapping_keys}"
)
known_variables |= set(chain.output_keys)
if "output_variables" not in values:
if values.get("return_all", False):
output_keys = known_variables.difference(input_variables)
else:
output_keys = chains[-1].output_keys
values["output_variables"] = output_keys
else:
missing_vars = set(values["output_variables"]).difference(known_variables)
if missing_vars:
raise ValueError(
f"Expected output variables that were not found: {missing_vars}."
)
return values
def _call(
self,
inputs: Dict[str, str],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
known_values = inputs.copy()
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
for i, chain in enumerate(self.chains):
callbacks = _run_manager.get_child()
outputs = chain(known_values, return_only_outputs=True, callbacks=callbacks)
known_values.update(outputs)
return {k: known_values[k] for k in self.output_variables}
async def _acall(
self,
inputs: Dict[str, Any],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
known_values = inputs.copy()
_run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()
callbacks = _run_manager.get_child()
for i, chain in enumerate(self.chains):
outputs = await chain.acall(
known_values, return_only_outputs=True, callbacks=callbacks
)
known_values.update(outputs)
return {k: known_values[k] for k in self.output_variables}
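Issue #6768 above reproduces because `SequentialChain.validate_chains` only credits the *outer* chain's memory keys when building `known_variables`; an inner chain's own memory variables are never added, so its extra input key looks missing. A framework-free sketch of that check (function and parameter names are illustrative, not LangChain's API):

```python
from typing import List


def validate_chain_inputs(
    input_variables: List[str],
    outer_memory_keys: List[str],
    chain_input_keys: List[List[str]],
) -> None:
    # Mirrors the validation above: each chain's inputs must be covered by
    # the pipeline inputs plus the *outer* chain's memory keys only.
    known = set(input_variables) | set(outer_memory_keys)
    for keys in chain_input_keys:
        missing = set(keys) - known
        if missing:
            raise ValueError(f"Missing required input keys: {missing}")


# An inner LLMChain expecting "chat_history" from its own memory fails,
# because only the outer chain's memory is credited here:
try:
    validate_chain_inputs(["human_input"], [], [["human_input", "chat_history"]])
except ValueError as err:
    print(err)  # → Missing required input keys: {'chat_history'}
```

The fix described in the linked PR amounts to also counting each inner chain's own memory variables as satisfied inputs during this validation step.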
class SimpleSequentialChain(Chain):
"""Simple chain where the outputs of one step feed directly into next."""
chains: List[Chain]
strip_outputs: bool = False
input_key: str = "input"
output_key: str = "output"
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def input_keys(self) -> List[str]:
"""Expect input key.
:meta private:
"""
return [self.input_key]
@property
def output_keys(self) -> List[str]:
"""Return output key.
:meta private:
"""
return [self.output_key]
@root_validator()
def validate_chains(cls, values: Dict) -> Dict:
"""Validate that chains are all single input/output."""
for chain in values["chains"]:
if len(chain.input_keys) != 1:
raise ValueError(
"Chains used in SimplePipeline should all have one input, got "
f"{chain} with {len(chain.input_keys)} inputs."
)
if len(chain.output_keys) != 1:
raise ValueError(
"Chains used in SimplePipeline should all have one output, got "
f"{chain} with {len(chain.output_keys)} outputs."
)
return values
def _call(
self,
inputs: Dict[str, str],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
_input = inputs[self.input_key]
color_mapping = get_color_mapping([str(i) for i in range(len(self.chains))])
for i, chain in enumerate(self.chains):
_input = chain.run(_input, callbacks=_run_manager.get_child(f"step_{i+1}"))
if self.strip_outputs:
_input = _input.strip()
_run_manager.on_text(
_input, color=color_mapping[str(i)], end="\n", verbose=self.verbose
)
return {self.output_key: _input}
async def _acall(
self,
inputs: Dict[str, Any],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
_run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()
callbacks = _run_manager.get_child()
_input = inputs[self.input_key]
color_mapping = get_color_mapping([str(i) for i in range(len(self.chains))])
for i, chain in enumerate(self.chains):
_input = await chain.arun(_input, callbacks=callbacks)
if self.strip_outputs:
_input = _input.strip()
await _run_manager.on_text(
_input, color=color_mapping[str(i)], end="\n", verbose=self.verbose
)
return {self.output_key: _input}
You should be able to add memory to one chain, not just the Sequential Chain | https://github.com/langchain-ai/langchain/issues/6768 | https://github.com/langchain-ai/langchain/pull/6769 | 488d2d5da95a2bacdca3d1623d862ac5ab28d59e | f307ca094b0d175d71ac424eba3d9f7ef5fc44f1 | "2023-06-26T16:09:11Z" | python | "2023-07-13T06:47:44Z" | tests/unit_tests/chains/test_sequential.py | """Test pipeline functionality."""
from typing import Dict, List, Optional
import pytest
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain
from langchain.chains.sequential import SequentialChain, SimpleSequentialChain
from langchain.memory.simple import SimpleMemory
class FakeChain(Chain):
"""Fake Chain for testing purposes."""
input_variables: List[str]
output_variables: List[str]
@property
def input_keys(self) -> List[str]:
"""Input keys this chain returns."""
return self.input_variables
@property
def output_keys(self) -> List[str]:
"""Input keys this chain returns."""
return self.output_variables
def _call( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,768 | Can't use memory for an internal LLMChain inside a SequentialChain | https://github.com/langchain-ai/langchain/issues/6768 | https://github.com/langchain-ai/langchain/pull/6769 | 488d2d5da95a2bacdca3d1623d862ac5ab28d59e | f307ca094b0d175d71ac424eba3d9f7ef5fc44f1 | "2023-06-26T16:09:11Z" | python | "2023-07-13T06:47:44Z" | tests/unit_tests/chains/test_sequential.py | self,
inputs: Dict[str, str],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
outputs = {}
for var in self.output_variables:
variables = [inputs[k] for k in self.input_variables]
outputs[var] = f"{' '.join(variables)}foo"
return outputs
def test_sequential_usage_single_inputs() -> None:
"""Test sequential on single input chains."""
chain_1 = FakeChain(input_variables=["foo"], output_variables=["bar"])
chain_2 = FakeChain(input_variables=["bar"], output_variables=["baz"])
chain = SequentialChain(chains=[chain_1, chain_2], input_variables=["foo"])
output = chain({"foo": "123"})
expected_output = {"baz": "123foofoo", "foo": "123"}
assert output == expected_output
def test_sequential_usage_multiple_inputs() -> None:
"""Test sequential on multiple input chains."""
chain_1 = FakeChain(input_variables=["foo", "test"], output_variables=["bar"])
chain_2 = FakeChain(input_variables=["bar", "foo"], output_variables=["baz"])
chain = SequentialChain(chains=[chain_1, chain_2], input_variables=["foo", "test"])
output = chain({"foo": "123", "test": "456"})
expected_output = {
"baz": "123 456foo 123foo",
"foo": "123",
"test": "456",
}
assert output == expected_output
def test_sequential_usage_memory() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,768 | Can't use memory for an internal LLMChain inside a SequentialChain | https://github.com/langchain-ai/langchain/issues/6768 | https://github.com/langchain-ai/langchain/pull/6769 | 488d2d5da95a2bacdca3d1623d862ac5ab28d59e | f307ca094b0d175d71ac424eba3d9f7ef5fc44f1 | "2023-06-26T16:09:11Z" | python | "2023-07-13T06:47:44Z" | tests/unit_tests/chains/test_sequential.py | """Test sequential usage with memory."""
memory = SimpleMemory(memories={"zab": "rab"})
chain_1 = FakeChain(input_variables=["foo"], output_variables=["bar"])
chain_2 = FakeChain(input_variables=["bar"], output_variables=["baz"])
chain = SequentialChain(
memory=memory, chains=[chain_1, chain_2], input_variables=["foo"]
)
output = chain({"foo": "123"})
expected_output = {"baz": "123foofoo", "foo": "123", "zab": "rab"}
assert output == expected_output
memory = SimpleMemory(memories={"zab": "rab", "foo": "rab"})
chain_1 = FakeChain(input_variables=["foo"], output_variables=["bar"])
chain_2 = FakeChain(input_variables=["bar"], output_variables=["baz"])
with pytest.raises(ValueError):
SequentialChain(
memory=memory, chains=[chain_1, chain_2], input_variables=["foo"]
)
def test_sequential_usage_multiple_outputs() -> None:
"""Test sequential usage on multiple output chains."""
chain_1 = FakeChain(input_variables=["foo"], output_variables=["bar", "test"])
chain_2 = FakeChain(input_variables=["bar", "foo"], output_variables=["baz"])
chain = SequentialChain(chains=[chain_1, chain_2], input_variables=["foo"])
output = chain({"foo": "123"})
expected_output = {
"baz": "123foo 123foo",
"foo": "123",
}
assert output == expected_output
def test_sequential_missing_inputs() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,768 | Can't use memory for an internal LLMChain inside a SequentialChain | https://github.com/langchain-ai/langchain/issues/6768 | https://github.com/langchain-ai/langchain/pull/6769 | 488d2d5da95a2bacdca3d1623d862ac5ab28d59e | f307ca094b0d175d71ac424eba3d9f7ef5fc44f1 | "2023-06-26T16:09:11Z" | python | "2023-07-13T06:47:44Z" | tests/unit_tests/chains/test_sequential.py | """Test error is raised when input variables are missing."""
chain_1 = FakeChain(input_variables=["foo"], output_variables=["bar"])
chain_2 = FakeChain(input_variables=["bar", "test"], output_variables=["baz"])
with pytest.raises(ValueError):
SequentialChain(chains=[chain_1, chain_2], input_variables=["foo"])
def test_sequential_bad_outputs() -> None:
"""Test error is raised when bad outputs are specified."""
chain_1 = FakeChain(input_variables=["foo"], output_variables=["bar"])
chain_2 = FakeChain(input_variables=["bar"], output_variables=["baz"])
with pytest.raises(ValueError):
SequentialChain(
chains=[chain_1, chain_2],
input_variables=["foo"],
output_variables=["test"],
)
def test_sequential_valid_outputs() -> None:
"""Test chain runs when valid outputs are specified."""
chain_1 = FakeChain(input_variables=["foo"], output_variables=["bar"])
chain_2 = FakeChain(input_variables=["bar"], output_variables=["baz"])
chain = SequentialChain(
chains=[chain_1, chain_2],
input_variables=["foo"],
output_variables=["bar", "baz"],
)
output = chain({"foo": "123"}, return_only_outputs=True)
expected_output = {"baz": "123foofoo", "bar": "123foo"}
assert output == expected_output
def test_sequential_overlapping_inputs() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,768 | Can't use memory for an internal LLMChain inside a SequentialChain | https://github.com/langchain-ai/langchain/issues/6768 | https://github.com/langchain-ai/langchain/pull/6769 | 488d2d5da95a2bacdca3d1623d862ac5ab28d59e | f307ca094b0d175d71ac424eba3d9f7ef5fc44f1 | "2023-06-26T16:09:11Z" | python | "2023-07-13T06:47:44Z" | tests/unit_tests/chains/test_sequential.py | """Test error is raised when input variables are overlapping."""
chain_1 = FakeChain(input_variables=["foo"], output_variables=["bar", "test"])
chain_2 = FakeChain(input_variables=["bar"], output_variables=["baz"])
with pytest.raises(ValueError):
SequentialChain(chains=[chain_1, chain_2], input_variables=["foo", "test"])
def test_simple_sequential_functionality() -> None:
"""Test simple sequential functionality."""
chain_1 = FakeChain(input_variables=["foo"], output_variables=["bar"])
chain_2 = FakeChain(input_variables=["bar"], output_variables=["baz"])
chain = SimpleSequentialChain(chains=[chain_1, chain_2])
output = chain({"input": "123"})
expected_output = {"output": "123foofoo", "input": "123"}
assert output == expected_output
def test_multi_input_errors() -> None:
"""Test simple sequential errors if multiple input variables are expected."""
chain_1 = FakeChain(input_variables=["foo"], output_variables=["bar"])
chain_2 = FakeChain(input_variables=["bar", "foo"], output_variables=["baz"])
with pytest.raises(ValueError):
SimpleSequentialChain(chains=[chain_1, chain_2])
def test_multi_output_errors() -> None:
"""Test simple sequential errors if multiple output variables are expected."""
chain_1 = FakeChain(input_variables=["foo"], output_variables=["bar", "grok"])
chain_2 = FakeChain(input_variables=["bar"], output_variables=["baz"])
with pytest.raises(ValueError):
SimpleSequentialChain(chains=[chain_1, chain_2]) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,231 | correct the base class | https://github.com/hwchase17/langchain/blob/c7db9febb0edeba1ea108adc4423b789404ce5f2/langchain/experimental/plan_and_execute/schema.py#L31
From
`class ListStepContainer(BaseModel):`
To
`class ListStepContainer(BaseStepContainer):`
| https://github.com/langchain-ai/langchain/issues/6231 | https://github.com/langchain-ai/langchain/pull/6232 | 98e1bbfbbdffca55775e847899d2823f6232ebe7 | af3f4010155a882b8b1021b6e0de130c628dab2c | "2023-06-15T15:16:56Z" | python | "2023-07-13T07:03:02Z" | langchain/experimental/plan_and_execute/schema.py | from abc import abstractmethod
from typing import List, Tuple
from pydantic import BaseModel, Field
from langchain.schema import BaseOutputParser
class Step(BaseModel):
value: str
class Plan(BaseModel):
steps: List[Step]
class StepResponse(BaseModel):
response: str
class BaseStepContainer(BaseModel):
@abstractmethod
def add_step(self, step: Step, step_response: StepResponse) -> None:
"""Add step and step response to the container."""
@abstractmethod
def get_final_response(self) -> str:
"""Return the final response based on steps taken."""
class ListStepContainer(BaseModel):
steps: List[Tuple[Step, StepResponse]] = Field(default_factory=list)
def add_step(self, step: Step, step_response: StepResponse) -> None:
self.steps.append((step, step_response))
def get_steps(self) -> List[Tuple[Step, StepResponse]]:
return self.steps
def get_final_response(self) -> str:
return self.steps[-1][1].response
class PlanOutputParser(BaseOutputParser):
@abstractmethod
def parse(self, text: str) -> Plan:
"""Parse into a plan.""" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | ### System Info
Langchain version: 0.0.231
Python version: 3.10.11
Bug:
There is an issue when clearing LLM cache for SQL Alchemy based caches.
langchain.llm_cache.clear() does not clear the cache for SQLite LLM cache.
Reason: it doesn't commit the deletion to the database, so the deletion doesn't take effect.
### Who can help?
@hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Configure SQLite LLM Cache
- Call an LLM via langchain
- The SQLite database gets populated with an entry
- call langchain.llm_cache.clear()
- Actual Behaviour: Notice that the entry is still in SQLite
### Expected behavior
- Expected Behaviour: The cache database table should be empty | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | langchain/cache.py | """Beta Feature: base interface for cache."""
from __future__ import annotations
import hashlib
import inspect
import json
import logging
from abc import ABC, abstractmethod
from datetime import timedelta
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Optional,
Sequence,
Tuple,
Type,
Union,
cast, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | langchain/cache.py | )
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.engine.base import Engine
from sqlalchemy.orm import Session
from langchain.utils import get_from_env
try:
from sqlalchemy.orm import declarative_base
except ImportError:
from sqlalchemy.ext.declarative import declarative_base
from langchain.embeddings.base import Embeddings
from langchain.load.dump import dumps
from langchain.load.load import loads
from langchain.schema import Generation
from langchain.vectorstores.redis import Redis as RedisVectorstore
logger = logging.getLogger(__file__)
if TYPE_CHECKING:
import momento
RETURN_VAL_TYPE = Sequence[Generation]
def _hash(_input: str) -> str:
"""Use a deterministic hashing approach."""
return hashlib.md5(_input.encode()).hexdigest()
def _dump_generations_to_json(generations: RETURN_VAL_TYPE) -> str:
"""Dump generations to json.
Args:
generations (RETURN_VAL_TYPE): A list of language model generations.
Returns:
str: Json representing a list of generations.
"""
return json.dumps([generation.dict() for generation in generations])
def _load_generations_from_json(generations_json: str) -> RETURN_VAL_TYPE: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | langchain/cache.py | """Load generations from json.
Args:
generations_json (str): A string of json representing a list of generations.
Raises:
ValueError: Could not decode json string to list of generations.
Returns:
RETURN_VAL_TYPE: A list of generations.
"""
try:
results = json.loads(generations_json)
return [Generation(**generation_dict) for generation_dict in results]
except json.JSONDecodeError:
raise ValueError(
f"Could not decode json to list of generations: {generations_json}"
)
class BaseCache(ABC):
"""Base interface for cache."""
@abstractmethod
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
@abstractmethod
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
@abstractmethod
def clear(self, **kwargs: Any) -> None:
"""Clear cache that can take additional keyword arguments."""
class InMemoryCache(BaseCache): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | langchain/cache.py | """Cache that stores things in memory."""
def __init__(self) -> None:
"""Initialize with empty cache."""
self._cache: Dict[Tuple[str, str], RETURN_VAL_TYPE] = {}
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
return self._cache.get((prompt, llm_string), None)
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
self._cache[(prompt, llm_string)] = return_val
def clear(self, **kwargs: Any) -> None:
"""Clear cache."""
self._cache = {}
Base = declarative_base()
class FullLLMCache(Base):
"""SQLite table for full LLM Cache (all generations)."""
__tablename__ = "full_llm_cache"
prompt = Column(String, primary_key=True)
llm = Column(String, primary_key=True)
idx = Column(Integer, primary_key=True)
response = Column(String)
class SQLAlchemyCache(BaseCache):
"""Cache that uses SQAlchemy as a backend."""
def __init__(self, engine: Engine, cache_schema: Type[FullLLMCache] = FullLLMCache): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | langchain/cache.py | """Initialize by creating all tables."""
self.engine = engine
self.cache_schema = cache_schema
self.cache_schema.metadata.create_all(self.engine)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
stmt = (
select(self.cache_schema.response)
.where(self.cache_schema.prompt == prompt)
.where(self.cache_schema.llm == llm_string)
.order_by(self.cache_schema.idx)
)
with Session(self.engine) as session:
rows = session.execute(stmt).fetchall()
if rows:
try:
return [loads(row[0]) for row in rows]
except Exception:
logger.warning(
"Retrieving a cache value that could not be deserialized "
"properly. This is likely due to the cache being in an "
"older format. Please recreate your cache to avoid this "
"error."
)
return [Generation(text=row[0]) for row in rows]
return None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | langchain/cache.py | """Update based on prompt and llm_string."""
items = [
self.cache_schema(prompt=prompt, llm=llm_string, response=dumps(gen), idx=i)
for i, gen in enumerate(return_val)
]
with Session(self.engine) as session, session.begin():
for item in items:
session.merge(item)
def clear(self, **kwargs: Any) -> None:
"""Clear cache."""
with Session(self.engine) as session:
session.query(self.cache_schema).delete()
class SQLiteCache(SQLAlchemyCache): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | langchain/cache.py | """Cache that uses SQLite as a backend."""
def __init__(self, database_path: str = ".langchain.db"):
"""Initialize by creating the engine and all tables."""
engine = create_engine(f"sqlite:///{database_path}")
super().__init__(engine)
class RedisCache(BaseCache):
"""Cache that uses Redis as a backend."""
def __init__(self, redis_: Any):
"""Initialize by passing in Redis instance."""
try:
from redis import Redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
if not isinstance(redis_, Redis):
raise ValueError("Please pass in Redis object.")
self.redis = redis_
def _key(self, prompt: str, llm_string: str) -> str:
"""Compute key from prompt and llm_string"""
return _hash(prompt + llm_string)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | langchain/cache.py | """Look up based on prompt and llm_string."""
generations = []
results = self.redis.hgetall(self._key(prompt, llm_string))
if results:
for _, text in results.items():
generations.append(Generation(text=text))
return generations if generations else None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"RedisCache only supports caching of normal LLM generations, "
f"got {type(gen)}"
)
key = self._key(prompt, llm_string)
self.redis.hset(
key,
mapping={
str(idx): generation.text for idx, generation in enumerate(return_val)
},
)
def clear(self, **kwargs: Any) -> None:
"""Clear cache. If `asynchronous` is True, flush asynchronously."""
asynchronous = kwargs.get("asynchronous", False)
self.redis.flushdb(asynchronous=asynchronous, **kwargs)
class RedisSemanticCache(BaseCache): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | ### System Info
Langchain version: 0.0.231
Python version: 3.10.11
Bug:
There is an issue when clearing LLM cache for SQL Alchemy based caches.
langchain.llm_cache.clear() does not clear the cache for SQLite LLM cache.
Reason: it doesn't commit the deletion database change. The deletion doesn't take effect.
### Who can help?
@hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Configure SQLite LLM Cache
- Call an LLM via langchain
- The SQLite database gets populated with an entry
- call langchain.llm_cache.clear()
- Actual Behaviour: Notice that the entry is still in SQLite
### Expected behavior
- Expected Behaviour: The cache database table should be empty | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | langchain/cache.py |
"""Cache that uses Redis as a vector-store backend."""
def __init__(
self, redis_url: str, embedding: Embeddings, score_threshold: float = 0.2
):
"""Initialize by passing in the `init` GPTCache func
Args:
redis_url (str): URL to connect to Redis.
embedding (Embedding): Embedding provider for semantic encoding and search.
score_threshold (float, 0.2): Score threshold used when searching the cache for semantically similar prompts.
Example:
.. code-block:: python
import langchain
from langchain.cache import RedisSemanticCache
from langchain.embeddings import OpenAIEmbeddings
langchain.llm_cache = RedisSemanticCache(
redis_url="redis://localhost:6379",
embedding=OpenAIEmbeddings()
)
"""
self._cache_dict: Dict[str, RedisVectorstore] = {}
self.redis_url = redis_url
self.embedding = embedding
self.score_threshold = score_threshold
def _index_name(self, llm_string: str) -> str:
hashed_index = _hash(llm_string)
return f"cache:{hashed_index}"
def _get_llm_cache(self, llm_string: str) -> RedisVectorstore:
index_name = self._index_name(llm_string)
if index_name in self._cache_dict:
return self._cache_dict[index_name]
try:
self._cache_dict[index_name] = RedisVectorstore.from_existing_index(
embedding=self.embedding,
index_name=index_name,
redis_url=self.redis_url,
)
except ValueError:
redis = RedisVectorstore(
embedding_function=self.embedding.embed_query,
index_name=index_name,
redis_url=self.redis_url,
)
_embedding = self.embedding.embed_query(text="test")
redis._create_index(dim=len(_embedding))
self._cache_dict[index_name] = redis
return self._cache_dict[index_name]
def clear(self, **kwargs: Any) -> None:
"""Clear semantic cache for a given llm_string."""
index_name = self._index_name(kwargs["llm_string"])
if index_name in self._cache_dict:
self._cache_dict[index_name].drop_index(
index_name=index_name, delete_documents=True, redis_url=self.redis_url
)
del self._cache_dict[index_name]
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
llm_cache = self._get_llm_cache(llm_string)
generations = []
results = llm_cache.similarity_search_limit_score(
query=prompt,
k=1,
score_threshold=self.score_threshold,
)
if results:
for document in results:
for text in document.metadata["return_val"]:
generations.append(Generation(text=text))
return generations if generations else None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"RedisSemanticCache only supports caching of "
f"normal LLM generations, got {type(gen)}"
)
llm_cache = self._get_llm_cache(llm_string)
metadata = {
"llm_string": llm_string,
"prompt": prompt,
"return_val": [generation.text for generation in return_val],
}
llm_cache.add_texts(texts=[prompt], metadatas=[metadata])
class GPTCache(BaseCache):
"""Cache that uses GPTCache as a backend."""
def __init__(
self,
init_func: Union[
Callable[[Any, str], None], Callable[[Any], None], None
] = None,
):
"""Initialize by passing in init function (default: `None`).
Args:
init_func (Optional[Callable[[Any], None]]): init `GPTCache` function
(default: `None`)
Example:
.. code-block:: python
# Initialize GPTCache with a custom init function
import gptcache
from gptcache.processor.pre import get_prompt
from gptcache.manager.factory import manager_factory
# Avoid multiple caches using the same file,
# causing different llm model caches to affect each other
def init_gptcache(cache_obj: gptcache.Cache, llm: str):
cache_obj.init(
pre_embedding_func=get_prompt,
data_manager=manager_factory(
manager="map",
data_dir=f"map_cache_{llm}"
),
)
langchain.llm_cache = GPTCache(init_gptcache)
"""
try:
import gptcache
except ImportError:
raise ImportError(
"Could not import gptcache python package. "
"Please install it with `pip install gptcache`."
)
self.init_gptcache_func: Union[
Callable[[Any, str], None], Callable[[Any], None], None
] = init_func
self.gptcache_dict: Dict[str, Any] = {}
def _new_gptcache(self, llm_string: str) -> Any:
"""New gptcache object"""
from gptcache import Cache
from gptcache.manager.factory import get_data_manager
from gptcache.processor.pre import get_prompt
_gptcache = Cache()
if self.init_gptcache_func is not None:
sig = inspect.signature(self.init_gptcache_func)
if len(sig.parameters) == 2:
self.init_gptcache_func(_gptcache, llm_string)
else:
self.init_gptcache_func(_gptcache)
else:
_gptcache.init(
pre_embedding_func=get_prompt,
data_manager=get_data_manager(data_path=llm_string),
)
self.gptcache_dict[llm_string] = _gptcache
return _gptcache
def _get_gptcache(self, llm_string: str) -> Any:
"""Get a cache object.
When the corresponding llm model cache does not exist, it will be created."""
return self.gptcache_dict.get(llm_string, self._new_gptcache(llm_string))
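Note that `dict.get(key, default)` evaluates its default argument eagerly, so a call shaped like `gptcache_dict.get(llm_string, self._new_gptcache(llm_string))` runs the factory even when the cache object already exists. A stdlib sketch of the pitfall and the lazy alternative (names here are hypothetical):

```python
calls = 0

def make_cache(key: str) -> dict:
    # Hypothetical stand-in for a cache-object factory; counts invocations.
    global calls
    calls += 1
    return {"key": key}

registry: dict = {}

# Eager: the factory runs even though the key is already present.
registry["llm-a"] = make_cache("llm-a")          # first build
_ = registry.get("llm-a", make_cache("llm-a"))   # factory ran again, result discarded

# Lazy: only build when the key is actually missing.
def get_or_create(key: str) -> dict:
    if key not in registry:
        registry[key] = make_cache(key)
    return registry[key]

_ = get_or_create("llm-a")   # no extra build
_ = get_or_create("llm-b")   # one build for the new key
```

The lazy form avoids constructing (and initializing) a fresh cache object on every lookup for an already-known `llm_string`.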
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up the cache data.
First, retrieve the corresponding cache object using the `llm_string` parameter,
and then retrieve the data from the cache based on the `prompt`.
"""
from gptcache.adapter.api import get
_gptcache = self.gptcache_dict.get(llm_string, None)
if _gptcache is None:
return None
res = get(prompt, cache_obj=_gptcache)
if res:
return [
Generation(**generation_dict) for generation_dict in json.loads(res)
]
return None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache.
First, retrieve the corresponding cache object using the `llm_string` parameter,
and then store the `prompt` and `return_val` in the cache object.
"""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"GPTCache only supports caching of normal LLM generations, "
f"got {type(gen)}"
)
from gptcache.adapter.api import put
_gptcache = self._get_gptcache(llm_string)
handled_data = json.dumps([generation.dict() for generation in return_val])
put(prompt, handled_data, cache_obj=_gptcache)
return None
def clear(self, **kwargs: Any) -> None:
"""Clear cache."""
from gptcache import Cache
for gptcache_instance in self.gptcache_dict.values():
gptcache_instance = cast(Cache, gptcache_instance)
gptcache_instance.flush()
self.gptcache_dict.clear()
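GPTCache above stores generations as a JSON array of their dicts (`json.dumps([generation.dict() ...])`) and rebuilds `Generation` objects on lookup. A minimal round-trip sketch using a stand-in dataclass — the real `Generation` from `langchain.schema` carries more fields:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Generation:
    # Stand-in for langchain.schema.Generation; the real class has more fields.
    text: str

def dump(generations: list) -> str:
    # Serialize the list of generations to a JSON array of dicts.
    return json.dumps([asdict(g) for g in generations])

def load(payload: str) -> list:
    # Rebuild Generation objects from the stored JSON payload.
    return [Generation(**d) for d in json.loads(payload)]

original = [Generation(text="hello"), Generation(text="world")]
restored = load(dump(original))
```

Serializing to JSON keeps the cached value backend-agnostic: any store that can hold a string can hold the full list of generations for a prompt.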
def _ensure_cache_exists(cache_client: momento.CacheClient, cache_name: str) -> None:
"""Create cache if it doesn't exist.
Raises:
SdkException: Momento service or network error
Exception: Unexpected response
"""
from momento.responses import CreateCache
create_cache_response = cache_client.create_cache(cache_name)
if isinstance(create_cache_response, CreateCache.Success) or isinstance(
create_cache_response, CreateCache.CacheAlreadyExists
):
return None
elif isinstance(create_cache_response, CreateCache.Error):
raise create_cache_response.inner_exception
else:
raise Exception(f"Unexpected response cache creation: {create_cache_response}")
def _validate_ttl(ttl: Optional[timedelta]) -> None:
if ttl is not None and ttl <= timedelta(seconds=0):
raise ValueError(f"ttl must be positive but was {ttl}.")
class MomentoCache(BaseCache):
"""Cache that uses Momento as a backend. See https://gomomento.com/"""
def __init__(
self,
cache_client: momento.CacheClient,
cache_name: str,
*,
ttl: Optional[timedelta] = None,
ensure_cache_exists: bool = True,
):
"""Instantiate a prompt cache using Momento as a backend.
Note: to instantiate the cache client passed to MomentoCache,
you must have a Momento account. See https://gomomento.com/.
Args:
cache_client (CacheClient): The Momento cache client.
cache_name (str): The name of the cache to use to store the data.
ttl (Optional[timedelta], optional): The time to live for the cache items.
Defaults to None, ie use the client default TTL.
ensure_cache_exists (bool, optional): Create the cache if it doesn't
exist. Defaults to True.
Raises:
ImportError: Momento python package is not installed.
TypeError: cache_client is not of type momento.CacheClientObject
ValueError: ttl is non-null and non-negative
"""
try:
from momento import CacheClient
except ImportError:
raise ImportError(
"Could not import momento python package. "
"Please install it with `pip install momento`."
)
if not isinstance(cache_client, CacheClient):
raise TypeError("cache_client must be a momento.CacheClient object.")
_validate_ttl(ttl)
if ensure_cache_exists:
_ensure_cache_exists(cache_client, cache_name)
self.cache_client = cache_client
self.cache_name = cache_name
self.ttl = ttl
@classmethod
def from_client_params(
cls,
cache_name: str,
ttl: timedelta,
*,
configuration: Optional[momento.config.Configuration] = None,
auth_token: Optional[str] = None,
**kwargs: Any,
) -> MomentoCache:
"""Construct cache from CacheClient parameters."""
try:
from momento import CacheClient, Configurations, CredentialProvider
except ImportError:
raise ImportError(
"Could not import momento python package. "
"Please install it with `pip install momento`."
)
if configuration is None:
configuration = Configurations.Laptop.v1()
auth_token = auth_token or get_from_env("auth_token", "MOMENTO_AUTH_TOKEN")
credentials = CredentialProvider.from_string(auth_token)
cache_client = CacheClient(configuration, credentials, default_ttl=ttl)
return cls(cache_client, cache_name, ttl=ttl, **kwargs)
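`get_from_env` (used above as `auth_token or get_from_env("auth_token", "MOMENTO_AUTH_TOKEN")`) falls back to an environment variable when no explicit value is supplied. A stdlib sketch of that lookup — the helper name comes from the document, but its body here is an assumption:

```python
import os

def get_from_env(key: str, env_key: str) -> str:
    # Assumed behavior: read the environment variable, fail loudly if unset.
    value = os.environ.get(env_key)
    if not value:
        raise ValueError(
            f"Did not find {key}; please set the environment variable "
            f"`{env_key}`."
        )
    return value

os.environ["MOMENTO_AUTH_TOKEN"] = "test-token"
token = get_from_env("auth_token", "MOMENTO_AUTH_TOKEN")
```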
def __key(self, prompt: str, llm_string: str) -> str:
"""Compute cache key from prompt and associated model and settings.
Args:
prompt (str): The prompt run through the language model.
llm_string (str): The language model version and settings.
Returns:
str: The cache key.
"""
return _hash(prompt + llm_string)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Lookup llm generations in cache by prompt and associated model and settings.
Args:
prompt (str): The prompt run through the language model.
llm_string (str): The language model version and settings.
Raises:
SdkException: Momento service or network error
Returns:
Optional[RETURN_VAL_TYPE]: A list of language model generations.
"""
from momento.responses import CacheGet
generations: RETURN_VAL_TYPE = []
get_response = self.cache_client.get(
self.cache_name, self.__key(prompt, llm_string)
)
if isinstance(get_response, CacheGet.Hit):
value = get_response.value_string
generations = _load_generations_from_json(value)
elif isinstance(get_response, CacheGet.Miss):
pass
elif isinstance(get_response, CacheGet.Error):
raise get_response.inner_exception
return generations if generations else None
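MomentoCache's `lookup` dispatches on the response *type* (`CacheGet.Hit` / `Miss` / `Error`) rather than using exceptions for the miss case. The shape of that pattern with hypothetical stand-in classes — these are not the real `momento.responses` types:

```python
class Hit:
    # Hypothetical stand-in for a cache-hit response carrying the stored string.
    def __init__(self, value: str) -> None:
        self.value_string = value

class Miss:
    # Hypothetical stand-in for a cache miss.
    pass

class Error:
    # Hypothetical stand-in for an error response wrapping an exception.
    def __init__(self, exc: Exception) -> None:
        self.inner_exception = exc

def handle(response) -> list:
    # Hit -> deserialize, Miss -> empty result, Error -> re-raise wrapped exception.
    if isinstance(response, Hit):
        return response.value_string.split(",")
    if isinstance(response, Miss):
        return []
    if isinstance(response, Error):
        raise response.inner_exception
    raise Exception(f"Unexpected response: {response}")

hit_result = handle(Hit("a,b"))
miss_result = handle(Miss())
```

Treating a miss as a normal (empty) result keeps the caller's happy path exception-free, while genuine service errors still surface as exceptions.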
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Store llm generations in cache.
Args:
prompt (str): The prompt run through the language model.
llm_string (str): The language model string.
return_val (RETURN_VAL_TYPE): A list of language model generations.
Raises:
SdkException: Momento service or network error
Exception: Unexpected response
"""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"Momento only supports caching of normal LLM generations, "
f"got {type(gen)}"
)
key = self.__key(prompt, llm_string)
value = _dump_generations_to_json(return_val)
set_response = self.cache_client.set(self.cache_name, key, value, self.ttl)
from momento.responses import CacheSet
if isinstance(set_response, CacheSet.Success):
pass
elif isinstance(set_response, CacheSet.Error):
raise set_response.inner_exception
else:
raise Exception(f"Unexpected response: {set_response}")
def clear(self, **kwargs: Any) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | ### System Info
Langchain version: 0.0.231
Python version: 3.10.11
Bug:
There is an issue when clearing LLM cache for SQL Alchemy based caches.
langchain.llm_cache.clear() does not clear the cache for SQLite LLM cache.
Reason: it doesn't commit the deletion database change. The deletion doesn't take effect.
### Who can help?
@hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Configure SQLite LLM Cache
- Call an LLM via langchain
- The SQLite database gets populated with an entry
- call langchain.llm_cache.clear()
- Actual Behaviour: Notice that the entry is still in SQLite
### Expected behavior
- Expected Behaviour: The cache database table should be empty | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | langchain/cache.py | """Clear the cache.
Raises:
SdkException: Momento service or network error
"""
from momento.responses import CacheFlush
flush_response = self.cache_client.flush_cache(self.cache_name)
if isinstance(flush_response, CacheFlush.Success):
pass
elif isinstance(flush_response, CacheFlush.Error):
raise flush_response.inner_exception |
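The reproduction steps described above can be demonstrated without calling a real LLM provider. The sketch below mimics the cache table using only the standard library's `sqlite3` module — the table and column names are illustrative stand-ins, not langchain's actual schema — and shows the root cause: a `DELETE` issued on a connection that is closed without `commit()` is rolled back.

```python
import os
import sqlite3
import tempfile

# Use an on-disk database so state survives across connections.
db_path = os.path.join(tempfile.mkdtemp(), "llm_cache.db")

# Seed one "cached generation" and commit it.
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE llm_cache (prompt TEXT, llm TEXT, response TEXT)")
conn.execute("INSERT INTO llm_cache VALUES ('How are you?', 'fake-llm', 'cached')")
conn.commit()
conn.close()

# A clear() that forgets to commit: the DELETE runs, but closing the
# connection rolls the pending transaction back.
conn = sqlite3.connect(db_path)
conn.execute("DELETE FROM llm_cache")
conn.close()  # no commit() -> implicit rollback

# A fresh connection still sees the "cleared" entry.
conn = sqlite3.connect(db_path)
remaining = conn.execute("SELECT COUNT(*) FROM llm_cache").fetchone()[0]
conn.close()
print(remaining)  # 1 -- the entry is still there
```

This is the same behaviour the issue reports: the cache table is not empty after `clear()`, because the deletion was never committed.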
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | ### System Info
Langchain version: 0.0.231
Python version: 3.10.11
Bug:
There is an issue when clearing the LLM cache for SQLAlchemy-based caches.
langchain.llm_cache.clear() does not clear the SQLite LLM cache.
Reason: it doesn't commit the deletion to the database, so the delete never takes effect.
### Who can help?
@hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Configure SQLite LLM Cache
- Call an LLM via langchain
- The SQLite database gets populated with an entry
- call langchain.llm_cache.clear()
- Actual Behaviour: Notice that the entry is still in SQLite
### Expected behavior
- Expected Behaviour: The cache database table should be empty | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | tests/unit_tests/test_cache.py | """Test caching for LLMs and ChatModels."""
from typing import Dict, Generator, List, Union
import pytest
from _pytest.fixtures import FixtureRequest
from sqlalchemy import create_engine
from sqlalchemy.orm import Session
import langchain
from langchain.cache import (
InMemoryCache,
SQLAlchemyCache,
)
from langchain.chat_models import FakeListChatModel
from langchain.chat_models.base import BaseChatModel, dumps
from langchain.llms import FakeListLLM
from langchain.llms.base import BaseLLM
from langchain.schema import (
ChatGeneration,
Generation,
)
from langchain.schema.messages import AIMessage, BaseMessage, HumanMessage
def get_sqlite_cache() -> SQLAlchemyCache: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | ### System Info
Langchain version: 0.0.231
Python version: 3.10.11
Bug:
There is an issue when clearing the LLM cache for SQLAlchemy-based caches.
langchain.llm_cache.clear() does not clear the SQLite LLM cache.
Reason: it doesn't commit the deletion to the database, so the delete never takes effect.
### Who can help?
@hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Configure SQLite LLM Cache
- Call an LLM via langchain
- The SQLite database gets populated with an entry
- call langchain.llm_cache.clear()
- Actual Behaviour: Notice that the entry is still in SQLite
### Expected behavior
- Expected Behaviour: The cache database table should be empty | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | tests/unit_tests/test_cache.py | return SQLAlchemyCache(engine=create_engine("sqlite://"))
CACHE_OPTIONS = [
InMemoryCache,
get_sqlite_cache,
]
@pytest.fixture(autouse=True, params=CACHE_OPTIONS)
def set_cache_and_teardown(request: FixtureRequest) -> Generator[None, None, None]:
cache_instance = request.param
langchain.llm_cache = cache_instance()
if langchain.llm_cache:
langchain.llm_cache.clear()
else:
raise ValueError("Cache not set. This should never happen.")
yield
if langchain.llm_cache:
langchain.llm_cache.clear()
langchain.llm_cache = None
else:
raise ValueError("Cache not set. This should never happen.")
def test_llm_caching() -> None: |
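The natural fix for the reported bug is a `session.commit()` inside the cache's `clear()` method. The sketch below mirrors that pattern with a self-contained toy model rather than langchain's real cache schema (the `ToyLLMCache` model is an assumption for illustration only), assuming SQLAlchemy 1.4+ is available. With the `commit()`, the `DELETE` persists across sessions; without it, closing the session rolls it back.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class ToyLLMCache(Base):  # illustrative stand-in for the real cache table
    __tablename__ = "toy_llm_cache"
    prompt = Column(String, primary_key=True)
    llm = Column(String, primary_key=True)
    idx = Column(Integer, primary_key=True)
    response = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

# Seed one cached row.
with Session(engine) as session:
    session.add(ToyLLMCache(prompt="p", llm="l", idx=0, response="cached"))
    session.commit()


def clear(engine) -> None:
    """Delete all cached rows -- the commit() is the crucial line."""
    with Session(engine) as session:
        session.query(ToyLLMCache).delete()
        session.commit()  # without this, closing the session rolls back


clear(engine)
with Session(engine) as session:
    print(session.query(ToyLLMCache).count())  # 0
```

The `set_cache_and_teardown` fixture above relies on exactly this behaviour: it calls `langchain.llm_cache.clear()` between tests and assumes the table really is emptied.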
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | ### System Info
Langchain version: 0.0.231
Python version: 3.10.11
Bug:
There is an issue when clearing the LLM cache for SQLAlchemy-based caches.
langchain.llm_cache.clear() does not clear the SQLite LLM cache.
Reason: it doesn't commit the deletion to the database, so the delete never takes effect.
### Who can help?
@hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Configure SQLite LLM Cache
- Call an LLM via langchain
- The SQLite database gets populated with an entry
- call langchain.llm_cache.clear()
- Actual Behaviour: Notice that the entry is still in SQLite
### Expected behavior
- Expected Behaviour: The cache database table should be empty | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | tests/unit_tests/test_cache.py | prompt = "How are you?"
response = "Test response"
cached_response = "Cached test response"
llm = FakeListLLM(responses=[response])
if langchain.llm_cache:
langchain.llm_cache.update(
prompt=prompt,
llm_string=create_llm_string(llm),
return_val=[Generation(text=cached_response)],
)
assert llm(prompt) == cached_response
else:
raise ValueError(
"The cache not set. This should never happen, as the pytest fixture "
"`set_cache_and_teardown` always sets the cache."
)
def test_old_sqlite_llm_caching() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | ### System Info
Langchain version: 0.0.231
Python version: 3.10.11
Bug:
There is an issue when clearing the LLM cache for SQLAlchemy-based caches.
langchain.llm_cache.clear() does not clear the SQLite LLM cache.
Reason: it doesn't commit the deletion to the database, so the delete never takes effect.
### Who can help?
@hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Configure SQLite LLM Cache
- Call an LLM via langchain
- The SQLite database gets populated with an entry
- call langchain.llm_cache.clear()
- Actual Behaviour: Notice that the entry is still in SQLite
### Expected behavior
- Expected Behaviour: The cache database table should be empty | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | tests/unit_tests/test_cache.py | if isinstance(langchain.llm_cache, SQLAlchemyCache):
prompt = "How are you?"
response = "Test response"
cached_response = "Cached test response"
llm = FakeListLLM(responses=[response])
items = [
langchain.llm_cache.cache_schema(
prompt=prompt,
llm=create_llm_string(llm),
response=cached_response,
idx=0,
)
]
with Session(langchain.llm_cache.engine) as session, session.begin():
for item in items:
session.merge(item)
assert llm(prompt) == cached_response
def test_chat_model_caching() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,652 | SQLite LLM cache clear does not take effect | ### System Info
Langchain version: 0.0.231
Python version: 3.10.11
Bug:
There is an issue when clearing LLM cache for SQL Alchemy based caches.
langchain.llm_cache.clear() does not clear the cache for SQLite LLM cache.
Reason: it doesn't commit the deletion database change. The deletion doesn't take effect.
### Who can help?
@hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Configure SQLite LLM Cache
- Call an LLM via langchain
- The SQLite database get's populated with an entry
- call langchain.llm_cache.clear()
- Actual Behaviour: Notice that the entry is still in SQLite
### Expected behavior
- Expected Behaviour: The cache database table should be empty | https://github.com/langchain-ai/langchain/issues/7652 | https://github.com/langchain-ai/langchain/pull/7653 | c17a80f11c200e2f7a65b54eb2f2942b8a6ea3bd | 24c165420827305e813f4b6d501f93d18f6d46a4 | "2023-07-13T12:36:48Z" | python | "2023-07-13T13:39:04Z" | tests/unit_tests/test_cache.py | prompt: List[BaseMessage] = [HumanMessage(content="How are you?")]
response = "Test response"
cached_response = "Cached test response"
cached_message = AIMessage(content=cached_response)
llm = FakeListChatModel(responses=[response])
if langchain.llm_cache:
langchain.llm_cache.update(
prompt=dumps(prompt),
llm_string=llm._get_llm_string(),
return_val=[ChatGeneration(message=cached_message)],
)
result = llm(prompt)
assert isinstance(result, AIMessage)
assert result.content == cached_response
else:
raise ValueError(
"The cache not set. This should never happen, as the pytest fixture "
"`set_cache_and_teardown` always sets the cache."
)
def test_chat_model_caching_params() -> None: |