status | repo_name | repo_url | issue_id | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime | updated_file | chunk_content
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,131 | Azure Cognitive Search Vector Store doesn't apply search_kwargs when performing queries | ### System Info
Langchain 0.0.199
Python 3.10.11
Windows 11 (but will occur on any platform).
### Who can help?
@hwchase17
@ruoccofabrizio
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
To reproduce this issue, create an AzureSearch vector store and a RetrievalQA chain with search_kwargs, as in this sample code:
```python
import os

from langchain.chains import RetrievalQA
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch

cognitive_search_name = os.environ["AZURE_SEARCH_SERVICE_NAME"]
vector_store_address: str = f"https://{cognitive_search_name}.search.windows.net/"
index_name: str = os.environ["AZURE_SEARCH_SERVICE_INDEX_NAME"]
vector_store_password: str = os.environ["AZURE_SEARCH_SERVICE_ADMIN_KEY"]

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1, client=any)
vector_store = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=index_name,
    embedding_function=embeddings.embed_query,
)

llm = AzureChatOpenAI(
    deployment_name="gpt35",
    model_name="gpt-3.5-turbo-0301",
    openai_api_version="2023-03-15-preview",
    temperature=0,  # was an undefined `temperature` variable in the original snippet
    client=None,
)

retriever = vector_store.as_retriever()  # was `get_vector_store()`, undefined in the snippet
retriever.search_kwargs = {"filters": "metadata eq 'something'"}
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
)
```
When you execute this chain via `qa`, the search_kwargs show up in the `similarity_search` method in `azuresearch.py` but are never passed on to `vector_search`, `hybrid_search`, or `semantic_hybrid`, where they would actually be used.
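For reference, a minimal sketch of the kind of pass-through this report is asking for. The subclass name is hypothetical, and it assumes `vector_search` and `hybrid_search` accept a `filters` keyword the way `semantic_hybrid_search` (in the excerpt below) does — this is an illustration, not the change that was actually merged:

```python
from typing import Any, List

from langchain.docstore.document import Document
from langchain.vectorstores.azuresearch import AzureSearch


class AzureSearchForwarding(AzureSearch):  # hypothetical name, for illustration only
    def similarity_search(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Document]:
        # Forward caller kwargs (e.g. filters="metadata eq 'something'")
        # instead of dropping them.
        filters = kwargs.get("filters")
        if self.search_type == "similarity":
            return self.vector_search(query, k=k, filters=filters)
        if self.search_type == "hybrid":
            return self.hybrid_search(query, k=k, filters=filters)
        if self.search_type == "semantic_hybrid":
            return self.semantic_hybrid_search(query, k=k, filters=filters)
        raise ValueError(f"search_type of {self.search_type} not allowed.")
```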
### Expected behavior
In my example they should apply a filter to the Azure Cognitive Search index before the vector search runs, but this does not happen: `filters` is always empty by the time it reaches the functions where it is used (`vector_search`, `hybrid_search`, and `semantic_hybrid`). | https://github.com/langchain-ai/langchain/issues/6131 | https://github.com/langchain-ai/langchain/pull/6132 | 395a2a3724507bafc7afe9e04ecbae60a7c66c7e | 22862043543e55fa0467c739714230eae3425512 | "2023-06-14T02:08:49Z" | python | "2023-06-19T00:39:06Z" | langchain/vectorstores/azuresearch.py | self, query: str, k: int = 4, filters: Optional[str] = None
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query with an hybrid query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query and score for each
"""
from azure.search.documents.models import Vector
results = self.client.search(
search_text=query,
vector=Vector(
value=np.array(
self.embedding_function(query), dtype=np.float32
).tolist(),
k=k,
fields=FIELDS_CONTENT_VECTOR,
),
select=[f"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}"],
filter=filters,
top=k,
)
docs = [
(
Document(
page_content=result[FIELDS_CONTENT],
metadata=json.loads(result[FIELDS_METADATA]),
),
float(result["@search.score"]),
)
for result in results
]
return docs
def semantic_hybrid_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""
Returns the most similar indexed documents to the query text.
Args:
query (str): The query text for which to find similar documents.
k (int): The number of documents to return. Default is 4.
Returns:
List[Document]: A list of documents that are most similar to the query text.
"""
docs_and_scores = self.semantic_hybrid_search_with_score(
query, k=k, filters=kwargs.get("filters", None)
)
return [doc for doc, _ in docs_and_scores]
def semantic_hybrid_search_with_score(
self, query: str, k: int = 4, filters: Optional[str] = None
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query with an hybrid query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query and score for each
"""
from azure.search.documents.models import Vector
results = self.client.search(
search_text=query,
vector=Vector(
value=np.array(
self.embedding_function(query), dtype=np.float32
).tolist(),
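# Note: the vector leg fetches 50 candidates here regardless of k;
# the final hit count is capped by top=k further down.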
k=50,
fields=FIELDS_CONTENT_VECTOR,
),
select=[f"{FIELDS_ID},{FIELDS_CONTENT},{FIELDS_METADATA}"],
filter=filters,
query_type="semantic",
query_language=self.semantic_query_language,
semantic_configuration_name=self.semantic_configuration_name,
query_caption="extractive",
query_answer="extractive",
top=k,
)
semantic_answers = results.get_answers()
semantic_answers_dict = {}
for semantic_answer in semantic_answers:
semantic_answers_dict[semantic_answer.key] = {
"text": semantic_answer.text,
"highlights": semantic_answer.highlights,
}
docs = [
(
Document(
page_content=result["content"],
metadata={
**json.loads(result["metadata"]),
**{
"captions": {
"text": result.get("@search.captions", [{}])[0].text,
"highlights": result.get("@search.captions", [{}])[
0
].highlights,
}
if result.get("@search.captions")
else {},
"answers": semantic_answers_dict.get(
json.loads(result["metadata"]).get("key"), ""
),
},
},
),
float(result["@search.score"]),
)
for result in results
]
return docs
@classmethod
def from_texts(
cls: Type[AzureSearch],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
azure_search_endpoint: str = "",
azure_search_key: str = "",
index_name: str = "langchain-index",
**kwargs: Any,
) -> AzureSearch:
azure_search = cls(
azure_search_endpoint,
azure_search_key,
index_name,
embedding.embed_query,
)
azure_search.add_texts(texts, metadatas, **kwargs)
return azure_search
class AzureSearchVectorStoreRetriever(BaseRetriever, BaseModel):
vectorstore: AzureSearch
search_type: str = "hybrid"
k: int = 4
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
@root_validator()
def validate_search_type(cls, values: Dict) -> Dict:
"""Validate search type."""
if "search_type" in values:
search_type = values["search_type"]
if search_type not in ("similarity", "hybrid", "semantic_hybrid"):
raise ValueError(f"search_type of {search_type} not allowed.")
return values
def get_relevant_documents(self, query: str) -> List[Document]:
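# Note: only `k` is forwarded below — no search_kwargs/filters ever reach
# the vectorstore methods, which is the behavior issue 6131 reports.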
if self.search_type == "similarity":
docs = self.vectorstore.vector_search(query, k=self.k)
elif self.search_type == "hybrid":
docs = self.vectorstore.hybrid_search(query, k=self.k)
elif self.search_type == "semantic_hybrid":
docs = self.vectorstore.semantic_hybrid_search(query, k=self.k)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
return docs
async def aget_relevant_documents(self, query: str) -> List[Document]:
raise NotImplementedError(
"AzureSearchVectorStoreRetriever does not support async"
) |
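Note how `get_relevant_documents` above only ever forwards `k`, so `retriever.search_kwargs = {...}` from the repro has nothing to attach to. Complementing the vectorstore-level sketch earlier, here is a hedged sketch of a forwarding retriever — the class name and the `search_kwargs` field are invented for illustration, not the merged fix from the linked PR:

```python
from typing import Any, Dict, List

from langchain.schema import Document
from langchain.vectorstores.azuresearch import AzureSearchVectorStoreRetriever


class ForwardingAzureRetriever(AzureSearchVectorStoreRetriever):  # hypothetical
    search_kwargs: Dict[str, Any] = {}  # e.g. {"filters": "metadata eq 'something'"}

    def get_relevant_documents(self, query: str) -> List[Document]:
        # Spread the stored kwargs into whichever search method is selected.
        if self.search_type == "similarity":
            return self.vectorstore.vector_search(query, k=self.k, **self.search_kwargs)
        if self.search_type == "hybrid":
            return self.vectorstore.hybrid_search(query, k=self.k, **self.search_kwargs)
        if self.search_type == "semantic_hybrid":
            return self.vectorstore.semantic_hybrid_search(
                query, k=self.k, **self.search_kwargs
            )
        raise ValueError(f"search_type of {self.search_type} not allowed.")
```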
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,483 | [SSL: CERTIFICATE_VERIFY_FAILED] while load from SitemapLoader | ### System Info
langchain: 0.0.181
platform: windows
python: 3.11.3
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
from langchain.document_loaders.sitemap import SitemapLoader

site_loader = SitemapLoader(web_path="https://help.glueup.com/sitemap_index.xml")
docs = site_loader.load()
print(docs[0])
# ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1002)
```
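Two hedged workarounds while the loader-level handling is sorted out. The first points Python's `ssl` machinery at `certifi`'s CA bundle, a common fix when the local certificate store is incomplete; the second flips the loader's `verify` attribute, which the `web_base.py` excerpt below reads in its fetch methods — only do that for debugging, and whether the flag actually takes effect on the async `aiohttp` path is part of what this issue is about:

```py
import os

import certifi

from langchain.document_loaders import SitemapLoader

# Option 1: use certifi's CA bundle for certificate verification.
os.environ["SSL_CERT_FILE"] = certifi.where()

# Option 2 (debugging only): skip TLS verification entirely.
site_loader = SitemapLoader(web_path="https://help.glueup.com/sitemap_index.xml")
site_loader.verify = False  # attribute consumed by WebBaseLoader's fetch paths below
docs = site_loader.load()
```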
### Expected behavior
print the first doc | https://github.com/langchain-ai/langchain/issues/5483 | https://github.com/langchain-ai/langchain/pull/6256 | 10bff4ecc420317a86043a8f0287363618be77e6 | b2b9ded12facf3ae205eb4b1cbb455eca6af8977 | "2023-05-31T07:52:33Z" | python | "2023-06-19T01:34:18Z" | langchain/document_loaders/web_base.py | """Web base loader class."""
import asyncio
import logging
import warnings
from typing import Any, Dict, List, Optional, Union
import aiohttp
import requests
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
logger = logging.getLogger(__name__)
default_header_template = {
"User-Agent": "",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*"
";q=0.8",
"Accept-Language": "en-US,en;q=0.5",
"Referer": "https://www.google.com/",
"DNT": "1",
"Connection": "keep-alive",
"Upgrade-Insecure-Requests": "1",
}
def _build_metadata(soup: Any, url: str) -> dict:
"""Build metadata from BeautifulSoup output."""
metadata = {"source": url}
if title := soup.find("title"):
metadata["title"] = title.get_text()
if description := soup.find("meta", attrs={"name": "description"}):
metadata["description"] = description.get("content", None)
if html := soup.find("html"):
metadata["language"] = html.get("lang", None)
return metadata
class WebBaseLoader(BaseLoader):
"""Loader that uses urllib and beautiful soup to load webpages."""
web_paths: List[str]
requests_per_second: int = 2
"""Max number of concurrent requests to make."""
default_parser: str = "html.parser"
"""Default parser to use for BeautifulSoup."""
requests_kwargs: Dict[str, Any] = {}
"""kwargs for requests"""
def __init__(
self,
web_path: Union[str, List[str]],
header_template: Optional[dict] = None,
verify: Optional[bool] = True,
):
"""Initialize with webpage path."""
if isinstance(web_path, str):
self.web_paths = [web_path]
elif isinstance(web_path, List):
self.web_paths = web_path
self.session = requests.Session()
try:
import bs4
except ImportError:
raise ValueError(
"bs4 package not found, please install it with " "`pip install bs4`"
)
self.verify = verify
headers = header_template or default_header_template
if not headers.get("User-Agent"):
try:
from fake_useragent import UserAgent
headers["User-Agent"] = UserAgent().random
except ImportError:
logger.info(
"fake_useragent not found, using default user agent." |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,483 | [SSL: CERTIFICATE_VERIFY_FAILED] while load from SitemapLoader | ### System Info
langchain: 0.0.181
platform: windows
python: 3.11.3
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
site_loader = SitemapLoader(web_path="https://help.glueup.com/sitemap_index.xml")
docs = site_loader.load()
print(docs[0])
# ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1002)
```
### Expected behavior
print the frist doc | https://github.com/langchain-ai/langchain/issues/5483 | https://github.com/langchain-ai/langchain/pull/6256 | 10bff4ecc420317a86043a8f0287363618be77e6 | b2b9ded12facf3ae205eb4b1cbb455eca6af8977 | "2023-05-31T07:52:33Z" | python | "2023-06-19T01:34:18Z" | langchain/document_loaders/web_base.py | "To get a realistic header for requests, "
"`pip install fake_useragent`."
)
self.session.headers = dict(headers)
@property
def web_path(self) -> str:
if len(self.web_paths) > 1:
raise ValueError("Multiple webpaths found.")
return self.web_paths[0]
async def _fetch(
self, url: str, retries: int = 3, cooldown: int = 2, backoff: float = 1.5
) -> str:
async with aiohttp.ClientSession() as session:
for i in range(retries):
try:
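# TLS verification of this aiohttp call is where the SSL error from
# issue 5483 surfaces on the async path.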
async with session.get(
url, headers=self.session.headers, verify=self.verify
) as response:
return await response.text()
except aiohttp.ClientConnectionError as e:
if i == retries - 1:
raise
else:
logger.warning(
f"Error fetching {url} with attempt "
f"{i + 1}/{retries}: {e}. Retrying..."
)
await asyncio.sleep(cooldown * backoff**i)
raise ValueError("retry count exceeded")
async def _fetch_with_rate_limit(
self, url: str, semaphore: asyncio.Semaphore
) -> str:
async with semaphore:
return await self._fetch(url)
async def fetch_all(self, urls: List[str]) -> Any:
"""Fetch all urls concurrently with rate limiting."""
semaphore = asyncio.Semaphore(self.requests_per_second)
tasks = []
for url in urls:
task = asyncio.ensure_future(self._fetch_with_rate_limit(url, semaphore))
tasks.append(task)
try:
from tqdm.asyncio import tqdm_asyncio
return await tqdm_asyncio.gather(
*tasks, desc="Fetching pages", ascii=True, mininterval=1
)
except ImportError:
warnings.warn("For better logging of progress, `pip install tqdm`")
return await asyncio.gather(*tasks)
@staticmethod
def _check_parser(parser: str) -> None:
"""Check that parser is valid for bs4."""
valid_parsers = ["html.parser", "lxml", "xml", "lxml-xml", "html5lib"]
if parser not in valid_parsers:
raise ValueError(
"`parser` must be one of " + ", ".join(valid_parsers) + "."
)
def scrape_all(self, urls: List[str], parser: Union[str, None] = None) -> List[Any]:
"""Fetch all urls, then return soups for all results."""
from bs4 import BeautifulSoup
results = asyncio.run(self.fetch_all(urls))
final_results = []
for i, result in enumerate(results):
url = urls[i]
if parser is None:
if url.endswith(".xml"):
parser = "xml"
else:
parser = self.default_parser
self._check_parser(parser)
final_results.append(BeautifulSoup(result, parser))
return final_results
def _scrape(self, url: str, parser: Union[str, None] = None) -> Any:
from bs4 import BeautifulSoup
if parser is None:
if url.endswith(".xml"):
parser = "xml"
else:
parser = self.default_parser
self._check_parser(parser)
html_doc = self.session.get(url, verify=self.verify, **self.requests_kwargs)
html_doc.encoding = html_doc.apparent_encoding
return BeautifulSoup(html_doc.text, parser)
def scrape(self, parser: Union[str, None] = None) -> Any:
"""Scrape data from webpage and return it in BeautifulSoup format."""
if parser is None:
parser = self.default_parser
return self._scrape(self.web_path, parser)
def load(self) -> List[Document]:
"""Load text from the url(s) in web_path."""
docs = []
for path in self.web_paths:
soup = self._scrape(path)
text = soup.get_text()
metadata = _build_metadata(soup, path)
docs.append(Document(page_content=text, metadata=metadata))
return docs
def aload(self) -> List[Document]:
"""Load text from the urls in web_path async into Documents."""
results = self.scrape_all(self.web_paths)
docs = []
for i in range(len(results)):
soup = results[i]
text = soup.get_text()
metadata = _build_metadata(soup, self.web_paths[i])
docs.append(Document(page_content=text, metadata=metadata))
return docs |
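A usage note grounded in the class above: the synchronous path (`load` → `_scrape`) goes through `requests` and spreads `self.requests_kwargs` into `session.get`, so per-request options can be injected there; the async path (`aload` → `_fetch`) goes through `aiohttp` and does not consult `requests_kwargs`. A minimal sketch:

```py
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://example.com")
# Honored only by the synchronous load() path; aload() uses aiohttp instead.
loader.requests_kwargs = {"timeout": 10}
docs = loader.load()
print(docs[0].metadata)  # {"source": ..., "title": ..., ...} via _build_metadata above
```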
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,431 | ChatPromptTemplate with partial variables is giving validation error | ### System Info
langchain-0.0.205, python3.10
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Run the following in a notebook cell:
```python
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate

chat_prompt = ChatPromptTemplate(
    messages=[
        HumanMessagePromptTemplate.from_template(
            "Do something with {question} using {context} giving it like {formatins}"
        )
    ],
    input_variables=["question", "context"],
    partial_variables={"formatins": "some structure"},
)
```
2. It throws the following error:
```
ValidationError: 1 validation error for ChatPromptTemplate
__root__
  Got mismatched input_variables. Expected: {'formatins', 'question', 'context'}. Got: ['question', 'context'] (type=value_error)
```
3. This was working until 24 hours ago; it is potentially related to a recent commit to langchain/prompts/chat.py. One workaround is sketched below.
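One workaround consistent with the validator in the `chat.py` excerpt further down: list the partially-bound variable in `input_variables` as well, so the root validator's set comparison passes; `format_messages` then merges `partial_variables` in before filtering per-message parameters, so callers still only supply `question` and `context`. A sketch, not the eventual upstream fix:

```python
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate

chat_prompt = ChatPromptTemplate(
    messages=[
        HumanMessagePromptTemplate.from_template(
            "Do something with {question} using {context} giving it like {formatins}"
        )
    ],
    # Include the partially-bound variable so the validator's set check passes.
    input_variables=["question", "context", "formatins"],
    partial_variables={"formatins": "some structure"},
)

# "formatins" is filled in from partial_variables at format time.
messages = chat_prompt.format_messages(question="q", context="c")
```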
### Expected behavior
The chat_prompt should get created with the partial variables injected.
If this is an expected change, could you please suggest what the new way to use partial_variables should be?
Thanks | https://github.com/langchain-ai/langchain/issues/6431 | https://github.com/langchain-ai/langchain/pull/6456 | 02c0a1e77eb9636850c8c29da33885a32b4cc2eb | 6efd5fa2b9d46c7b4db6ad638097f010b745f0cc | "2023-06-19T16:15:49Z" | python | "2023-06-20T05:08:15Z" | langchain/prompts/chat.py | """Chat prompt template."""
from __future__ import annotations
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Any, Callable, List, Sequence, Tuple, Type, TypeVar, Union
from pydantic import Field, root_validator
from langchain.load.serializable import Serializable
from langchain.memory.buffer import get_buffer_string
from langchain.prompts.base import BasePromptTemplate, StringPromptTemplate
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import (
AIMessage,
BaseMessage,
ChatMessage,
HumanMessage,
PromptValue,
SystemMessage,
)
class BaseMessagePromptTemplate(Serializable, ABC):
@property
def lc_serializable(self) -> bool:
return True
@abstractmethod
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
"""To messages."""
@property
@abstractmethod
def input_variables(self) -> List[str]:
"""Input variables for this prompt template."""
class MessagesPlaceholder(BaseMessagePromptTemplate):
"""Prompt template that assumes variable is already list of messages."""
variable_name: str
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
"""To a BaseMessage."""
value = kwargs[self.variable_name]
if not isinstance(value, list):
raise ValueError(
f"variable {self.variable_name} should be a list of base messages, "
f"got {value}"
)
for v in value:
if not isinstance(v, BaseMessage):
raise ValueError(
f"variable {self.variable_name} should be a list of base messages,"
f" got {value}"
)
return value
@property
def input_variables(self) -> List[str]:
"""Input variables for this prompt template."""
return [self.variable_name]
MessagePromptTemplateT = TypeVar(
"MessagePromptTemplateT", bound="BaseStringMessagePromptTemplate"
)
class BaseStringMessagePromptTemplate(BaseMessagePromptTemplate, ABC):
prompt: StringPromptTemplate
additional_kwargs: dict = Field(default_factory=dict)
@classmethod
def from_template(
cls: Type[MessagePromptTemplateT],
template: str,
template_format: str = "f-string",
**kwargs: Any,
) -> MessagePromptTemplateT:
prompt = PromptTemplate.from_template(template, template_format=template_format)
return cls(prompt=prompt, **kwargs)
@classmethod
def from_template_file(
cls: Type[MessagePromptTemplateT],
template_file: Union[str, Path],
input_variables: List[str],
**kwargs: Any,
) -> MessagePromptTemplateT:
prompt = PromptTemplate.from_file(template_file, input_variables)
return cls(prompt=prompt, **kwargs)
@abstractmethod
def format(self, **kwargs: Any) -> BaseMessage:
"""To a BaseMessage."""
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
return [self.format(**kwargs)]
@property
def input_variables(self) -> List[str]:
return self.prompt.input_variables
class ChatMessagePromptTemplate(BaseStringMessagePromptTemplate):
role: str
def format(self, **kwargs: Any) -> BaseMessage:
text = self.prompt.format(**kwargs)
return ChatMessage(
content=text, role=self.role, additional_kwargs=self.additional_kwargs
)
class HumanMessagePromptTemplate(BaseStringMessagePromptTemplate):
def format(self, **kwargs: Any) -> BaseMessage:
text = self.prompt.format(**kwargs)
return HumanMessage(content=text, additional_kwargs=self.additional_kwargs)
class AIMessagePromptTemplate(BaseStringMessagePromptTemplate):
def format(self, **kwargs: Any) -> BaseMessage:
text = self.prompt.format(**kwargs)
return AIMessage(content=text, additional_kwargs=self.additional_kwargs)
class SystemMessagePromptTemplate(BaseStringMessagePromptTemplate):
def format(self, **kwargs: Any) -> BaseMessage:
text = self.prompt.format(**kwargs)
return SystemMessage(content=text, additional_kwargs=self.additional_kwargs)
class ChatPromptValue(PromptValue):
messages: List[BaseMessage]
def to_string(self) -> str:
"""Return prompt as string."""
return get_buffer_string(self.messages)
def to_messages(self) -> List[BaseMessage]:
"""Return prompt as messages."""
return self.messages
class BaseChatPromptTemplate(BasePromptTemplate, ABC):
def format(self, **kwargs: Any) -> str:
return self.format_prompt(**kwargs).to_string()
def format_prompt(self, **kwargs: Any) -> PromptValue:
messages = self.format_messages(**kwargs)
return ChatPromptValue(messages=messages)
@abstractmethod
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
"""Format kwargs into a list of messages."""
class ChatPromptTemplate(BaseChatPromptTemplate, ABC):
input_variables: List[str]
messages: List[Union[BaseMessagePromptTemplate, BaseMessage]]
@root_validator(pre=True)
def validate_input_variables(cls, values: dict) -> dict:
messages = values["messages"]
input_vars = set()
for message in messages:
if isinstance(message, BaseMessagePromptTemplate):
input_vars.update(message.input_variables)
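# The strict set-equality check below is what rejects constructor calls that
# name a variable only in partial_variables (issue 6431).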
if "input_variables" in values:
if input_vars != set(values["input_variables"]):
raise ValueError(
"Got mismatched input_variables. "
f"Expected: {input_vars}. "
f"Got: {values['input_variables']}"
)
else:
values["input_variables"] = list(input_vars)
return values
@classmethod
def from_template(cls, template: str, **kwargs: Any) -> ChatPromptTemplate:
prompt_template = PromptTemplate.from_template(template, **kwargs)
message = HumanMessagePromptTemplate(prompt=prompt_template)
return cls.from_messages([message])
@classmethod
def from_role_strings(
cls, string_messages: List[Tuple[str, str]]
) -> ChatPromptTemplate:
messages = [
ChatMessagePromptTemplate(
prompt=PromptTemplate.from_template(template), role=role
)
for role, template in string_messages
]
return cls.from_messages(messages)
@classmethod
def from_strings(
cls, string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]
) -> ChatPromptTemplate:
messages = [
role(prompt=PromptTemplate.from_template(template))
for role, template in string_messages
]
return cls.from_messages(messages)
@classmethod
def from_messages(
cls, messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage]]
) -> ChatPromptTemplate:
input_vars = set()
for message in messages:
if isinstance(message, BaseMessagePromptTemplate):
input_vars.update(message.input_variables)
return cls(input_variables=list(input_vars), messages=messages)
def format(self, **kwargs: Any) -> str:
return self.format_prompt(**kwargs).to_string()
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
kwargs = self._merge_partial_and_user_variables(**kwargs)
result = []
for message_template in self.messages:
if isinstance(message_template, BaseMessage):
result.extend([message_template])
elif isinstance(message_template, BaseMessagePromptTemplate):
rel_params = {
k: v
for k, v in kwargs.items()
if k in message_template.input_variables
}
message = message_template.format_messages(**rel_params)
result.extend(message)
else:
raise ValueError(f"Unexpected input: {message_template}")
return result
def partial(self, **kwargs: Union[str, Callable[[], str]]) -> BasePromptTemplate:
raise NotImplementedError
@property
def _prompt_type(self) -> str:
return "chat"
def save(self, file_path: Union[Path, str]) -> None:
raise NotImplementedError |
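Note the asymmetry visible in this chunk: `format_messages` already merges `partial_variables` via `_merge_partial_and_user_variables`, yet `partial()` raises `NotImplementedError`. A `partial()` consistent with that merge step could look roughly like the sketch below (an illustration only - not necessarily what PR 6456 shipped):
```
from typing import Callable, Union

# Hypothetical partial() for ChatPromptTemplate: move the bound names out of
# input_variables and into partial_variables, mirroring PromptTemplate.partial.
def partial(self, **kwargs: Union[str, Callable[[], str]]) -> "ChatPromptTemplate":
    return self.copy(
        update={
            "input_variables": [
                v for v in self.input_variables if v not in kwargs
            ],
            "partial_variables": {**self.partial_variables, **kwargs},
        }
    )
```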
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,431 | ChatPromptTemplate with partial variables is giving validation error | (duplicate issue body removed - identical to the first 6,431 record above)
Thanks | https://github.com/langchain-ai/langchain/issues/6431 | https://github.com/langchain-ai/langchain/pull/6456 | 02c0a1e77eb9636850c8c29da33885a32b4cc2eb | 6efd5fa2b9d46c7b4db6ad638097f010b745f0cc | "2023-06-19T16:15:49Z" | python | "2023-06-20T05:08:15Z" | tests/unit_tests/prompts/test_chat.py | from pathlib import Path
from typing import List
import pytest
from langchain.prompts import PromptTemplate
from langchain.prompts.chat import (
AIMessagePromptTemplate,
BaseMessagePromptTemplate,
ChatMessagePromptTemplate,
ChatPromptTemplate,
ChatPromptValue,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
)
from langchain.schema import HumanMessage
def create_messages() -> List[BaseMessagePromptTemplate]:
"""Create messages."""
system_message_prompt = SystemMessagePromptTemplate(
prompt=PromptTemplate(
template="Here's some context: {context}",
input_variables=["context"],
)
)
human_message_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(
template="Hello {foo}, I'm {bar}. Thanks for the {context}",
input_variables=["foo", "bar", "context"],
)
)
ai_message_prompt = AIMessagePromptTemplate(
prompt=PromptTemplate( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,431 | ChatPromptTemplate with partial variables is giving validation error | (duplicate issue body removed - identical to the first 6,431 record above)
Thanks | https://github.com/langchain-ai/langchain/issues/6431 | https://github.com/langchain-ai/langchain/pull/6456 | 02c0a1e77eb9636850c8c29da33885a32b4cc2eb | 6efd5fa2b9d46c7b4db6ad638097f010b745f0cc | "2023-06-19T16:15:49Z" | python | "2023-06-20T05:08:15Z" | tests/unit_tests/prompts/test_chat.py | template="I'm an AI. I'm {foo}. I'm {bar}.",
input_variables=["foo", "bar"],
)
)
chat_message_prompt = ChatMessagePromptTemplate(
role="test",
prompt=PromptTemplate(
template="I'm a generic message. I'm {foo}. I'm {bar}.",
input_variables=["foo", "bar"],
),
)
return [
system_message_prompt,
human_message_prompt,
ai_message_prompt,
chat_message_prompt,
]
def create_chat_prompt_template() -> ChatPromptTemplate:
"""Create a chat prompt template."""
return ChatPromptTemplate(
input_variables=["foo", "bar", "context"],
messages=create_messages(),
)
def test_create_chat_prompt_template_from_template() -> None:
"""Create a chat prompt template."""
prompt = ChatPromptTemplate.from_template("hi {foo} {bar}")
assert prompt.messages == [
HumanMessagePromptTemplate.from_template("hi {foo} {bar}")
]
def test_create_chat_prompt_template_from_template_partial() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,431 | ChatPromptTemplate with partial variables is giving validation error | (duplicate issue body removed - identical to the first 6,431 record above)
Thanks | https://github.com/langchain-ai/langchain/issues/6431 | https://github.com/langchain-ai/langchain/pull/6456 | 02c0a1e77eb9636850c8c29da33885a32b4cc2eb | 6efd5fa2b9d46c7b4db6ad638097f010b745f0cc | "2023-06-19T16:15:49Z" | python | "2023-06-20T05:08:15Z" | tests/unit_tests/prompts/test_chat.py | """Create a chat prompt template with partials."""
prompt = ChatPromptTemplate.from_template(
"hi {foo} {bar}", partial_variables={"foo": "jim"}
)
expected_prompt = PromptTemplate(
template="hi {foo} {bar}",
input_variables=["bar"],
partial_variables={"foo": "jim"},
)
assert len(prompt.messages) == 1
output_prompt = prompt.messages[0]
assert isinstance(output_prompt, HumanMessagePromptTemplate)
assert output_prompt.prompt == expected_prompt
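This test pins exactly the path the reporter needs. A usage sketch of the same API, with the rendered string following the `Human:` prefix convention asserted elsewhere in this file:
```
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template(
    "hi {foo} {bar}", partial_variables={"foo": "jim"}
)
# Only the unbound variable has to be supplied at format time.
print(prompt.format(bar="there"))  # expected: "Human: hi jim there"
```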
def test_message_prompt_template_from_template_file() -> None:
expected = ChatMessagePromptTemplate(
prompt=PromptTemplate(
template="Question: {question}\nAnswer:", input_variables=["question"]
),
role="human",
)
actual = ChatMessagePromptTemplate.from_template_file(
Path(__file__).parent.parent / "data" / "prompt_file.txt",
["question"],
role="human",
)
assert expected == actual
def test_chat_prompt_template() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,431 | ChatPromptTemplate with partial variables is giving validation error | (duplicate issue body removed - identical to the first 6,431 record above)
Thanks | https://github.com/langchain-ai/langchain/issues/6431 | https://github.com/langchain-ai/langchain/pull/6456 | 02c0a1e77eb9636850c8c29da33885a32b4cc2eb | 6efd5fa2b9d46c7b4db6ad638097f010b745f0cc | "2023-06-19T16:15:49Z" | python | "2023-06-20T05:08:15Z" | tests/unit_tests/prompts/test_chat.py | """Test chat prompt template."""
prompt_template = create_chat_prompt_template()
prompt = prompt_template.format_prompt(foo="foo", bar="bar", context="context")
assert isinstance(prompt, ChatPromptValue)
messages = prompt.to_messages()
assert len(messages) == 4
assert messages[0].content == "Here's some context: context"
assert messages[1].content == "Hello foo, I'm bar. Thanks for the context"
assert messages[2].content == "I'm an AI. I'm foo. I'm bar."
assert messages[3].content == "I'm a generic message. I'm foo. I'm bar."
string = prompt.to_string()
expected = (
"System: Here's some context: context\n"
"Human: Hello foo, I'm bar. Thanks for the context\n"
"AI: I'm an AI. I'm foo. I'm bar.\n"
"test: I'm a generic message. I'm foo. I'm bar."
)
assert string == expected
string = prompt_template.format(foo="foo", bar="bar", context="context")
assert string == expected
def test_chat_prompt_template_from_messages() -> None:
"""Test creating a chat prompt template from messages."""
chat_prompt_template = ChatPromptTemplate.from_messages(create_messages())
assert sorted(chat_prompt_template.input_variables) == sorted(
["context", "foo", "bar"]
)
assert len(chat_prompt_template.messages) == 4
def test_chat_prompt_template_with_messages() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,431 | ChatPromptTemplate with partial variables is giving validation error | (duplicate issue body removed - identical to the first 6,431 record above)
Thanks | https://github.com/langchain-ai/langchain/issues/6431 | https://github.com/langchain-ai/langchain/pull/6456 | 02c0a1e77eb9636850c8c29da33885a32b4cc2eb | 6efd5fa2b9d46c7b4db6ad638097f010b745f0cc | "2023-06-19T16:15:49Z" | python | "2023-06-20T05:08:15Z" | tests/unit_tests/prompts/test_chat.py | messages = create_messages() + [HumanMessage(content="foo")]
chat_prompt_template = ChatPromptTemplate.from_messages(messages)
assert sorted(chat_prompt_template.input_variables) == sorted(
["context", "foo", "bar"]
)
assert len(chat_prompt_template.messages) == 5
prompt_value = chat_prompt_template.format_prompt(
context="see", foo="this", bar="magic"
)
prompt_value_messages = prompt_value.to_messages()
assert prompt_value_messages[-1] == HumanMessage(content="foo")
def test_chat_invalid_input_variables_extra() -> None:
messages = [HumanMessage(content="foo")]
with pytest.raises(ValueError):
ChatPromptTemplate(messages=messages, input_variables=["foo"])
def test_chat_invalid_input_variables_missing() -> None:
messages = [HumanMessagePromptTemplate.from_template("{foo}")]
with pytest.raises(ValueError):
ChatPromptTemplate(messages=messages, input_variables=[])
def test_infer_variables() -> None:
messages = [HumanMessagePromptTemplate.from_template("{foo}")]
prompt = ChatPromptTemplate(messages=messages)
assert prompt.input_variables == ["foo"] |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,380 | Neo4J schema not inferred correctly by Neo4JGraph Object | ### System Info
langchain=0.0.2
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behavior:
1. Push the following dataset to Neo4j (e.g., in the Neo4j Browser)
```
CREATE (la:LabelA {property_a: 'a'})
CREATE (lb:LabelB {property_b1: 123, property_b2: 'b2'})
CREATE (lc:LabelC)
MERGE (la)-[:REL_TYPE]-> (lb)
MERGE (la)-[:REL_TYPE {rel_prop: 'abc'}]-> (lc)
```
2. Instantiate a Neo4jGraph object, connect, and refresh the schema
```
from langchain.graphs import Neo4jGraph
graph = Neo4jGraph(
url=NEO4J_URL,
username=NEO4J_USERNAME,
password=NEO4J_PASSWORD,
)
graph.refresh_schema()
print(graph.get_schema)
```
You will obtain
```
Node properties are the following:
[{'properties': [{'property': 'property_a', 'type': 'STRING'}], 'labels': 'LabelA'}, {'properties': [{'property': 'property_b2', 'type': 'STRING'}, {'property': 'property_b1', 'type': 'INTEGER'}], 'labels': 'LabelB'}]
Relationship properties are the following:
[{'type': 'REL_TYPE', 'properties': [{'property': 'rel_prop', 'type': 'STRING'}]}]
The relationships are the following:
['(:LabelA)-[:REL_TYPE]->(:LabelB)']
```
### Expected behavior
```
Node properties are the following:
[{'properties': [{'property': 'property_a', 'type': 'STRING'}], 'labels': 'LabelA'}, {'properties': [{'property': 'property_b2', 'type': 'STRING'}, {'property': 'property_b1', 'type': 'INTEGER'}], 'labels': 'LabelB'}]
Relationship properties are the following:
[{'type': 'REL_TYPE', 'properties': [{'property': 'rel_prop', 'type': 'STRING'}]}]
The relationships are the following:
['(:LabelA)-[:REL_TYPE]->(:LabelB)', '(:LabelA)-[:REL_TYPE]->(:LabelC)']
``` | https://github.com/langchain-ai/langchain/issues/6380 | https://github.com/langchain-ai/langchain/pull/6381 | b0d80c4b3e128f27bd1b9df48ed4afbe17950fec | 22601b0b6323e6465f78ca9bc16152062a2b65ba | "2023-06-18T19:19:04Z" | python | "2023-06-20T05:48:35Z" | langchain/graphs/neo4j_graph.py | from typing import Any, Dict, List
node_properties_query = """
CALL apoc.meta.data()
YIELD label, other, elementType, type, property
WHERE NOT type = "RELATIONSHIP" AND elementType = "node"
WITH label AS nodeLabels, collect({property:property, type:type}) AS properties
RETURN {labels: nodeLabels, properties: properties} AS output
"""
rel_properties_query = """
CALL apoc.meta.data()
YIELD label, other, elementType, type, property
WHERE NOT type = "RELATIONSHIP" AND elementType = "relationship"
WITH label AS nodeLabels, collect({property:property, type:type}) AS properties
RETURN {type: nodeLabels, properties: properties} AS output
"""
rel_query = """
CALL apoc.meta.data()
YIELD label, other, elementType, type, property
WHERE type = "RELATIONSHIP" AND elementType = "node"
RETURN "(:" + label + ")-[:" + property + "]->(:" + toString(other[0]) + ")" AS output
"""
class Neo4jGraph: |
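The missing relationship traces back to `rel_query` above: for each (label, relationship) pair, `apoc.meta.data()` returns the connected target labels as a list in `other`, but the query keeps only `toString(other[0])`. With the reporter's dataset, `other` for `(LabelA, REL_TYPE)` contains both `LabelB` and `LabelC`, so `LabelC` is silently dropped. A corrected query would unwind the list - a sketch (the exact wording of the fix in PR 6381 may differ):
```
# Sketch of a rel_query that reports every target label instead of just other[0].
rel_query_fixed = """
CALL apoc.meta.data()
YIELD label, other, elementType, type, property
WHERE type = "RELATIONSHIP" AND elementType = "node"
UNWIND other AS other_node
RETURN "(:" + label + ")-[:" + property + "]->(:" + toString(other_node) + ")" AS output
"""
```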
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,380 | Neo4J schema not inferred correctly by Neo4JGraph Object | (duplicate issue body removed - identical to the first 6,380 record above)
``` | https://github.com/langchain-ai/langchain/issues/6380 | https://github.com/langchain-ai/langchain/pull/6381 | b0d80c4b3e128f27bd1b9df48ed4afbe17950fec | 22601b0b6323e6465f78ca9bc16152062a2b65ba | "2023-06-18T19:19:04Z" | python | "2023-06-20T05:48:35Z" | langchain/graphs/neo4j_graph.py | """Neo4j wrapper for graph operations."""
def __init__(
self, url: str, username: str, password: str, database: str = "neo4j"
) -> None:
"""Create a new Neo4j graph wrapper instance."""
try:
import neo4j
except ImportError:
raise ValueError( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,380 | Neo4J schema not inferred correctly by Neo4JGraph Object | (duplicate issue body removed - identical to the first 6,380 record above)
``` | https://github.com/langchain-ai/langchain/issues/6380 | https://github.com/langchain-ai/langchain/pull/6381 | b0d80c4b3e128f27bd1b9df48ed4afbe17950fec | 22601b0b6323e6465f78ca9bc16152062a2b65ba | "2023-06-18T19:19:04Z" | python | "2023-06-20T05:48:35Z" | langchain/graphs/neo4j_graph.py | "Could not import neo4j python package. "
"Please install it with `pip install neo4j`."
)
self._driver = neo4j.GraphDatabase.driver(url, auth=(username, password))
self._database = database
self.schema = ""
try:
self._driver.verify_connectivity()
except neo4j.exceptions.ServiceUnavailable:
raise ValueError(
"Could not connect to Neo4j database. "
"Please ensure that the url is correct"
)
except neo4j.exceptions.AuthError:
raise ValueError(
"Could not connect to Neo4j database. "
"Please ensure that the username and password are correct"
)
try:
self.refresh_schema()
except neo4j.exceptions.ClientError:
raise ValueError(
"Could not use APOC procedures. "
"Please ensure the APOC plugin is installed in Neo4j and that "
"'apoc.meta.data()' is allowed in Neo4j configuration "
)
@property
def get_schema(self) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,380 | Neo4J schema not inferred correctly by Neo4JGraph Object | (duplicate issue body removed - identical to the first 6,380 record above)
``` | https://github.com/langchain-ai/langchain/issues/6380 | https://github.com/langchain-ai/langchain/pull/6381 | b0d80c4b3e128f27bd1b9df48ed4afbe17950fec | 22601b0b6323e6465f78ca9bc16152062a2b65ba | "2023-06-18T19:19:04Z" | python | "2023-06-20T05:48:35Z" | langchain/graphs/neo4j_graph.py | """Returns the schema of the Neo4j database"""
return self.schema
def query(self, query: str, params: dict = {}) -> List[Dict[str, Any]]:
"""Query Neo4j database."""
from neo4j.exceptions import CypherSyntaxError
with self._driver.session(database=self._database) as session:
try:
data = session.run(query, params)
return [r.data() for r in data]
except CypherSyntaxError as e:
raise ValueError("Generated Cypher Statement is not valid\n" f"{e}")
def refresh_schema(self) -> None:
"""
Refreshes the Neo4j graph schema information.
"""
node_properties = self.query(node_properties_query)
relationships_properties = self.query(rel_properties_query)
relationships = self.query(rel_query)
self.schema = f"""
Node properties are the following:
{[el['output'] for el in node_properties]}
Relationship properties are the following:
{[el['output'] for el in relationships_properties]}
The relationships are the following:
{[el['output'] for el in relationships]}
""" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,380 | Neo4J schema not inferred correctly by Neo4JGraph Object | (duplicate issue body removed - identical to the first 6,380 record above)
``` | https://github.com/langchain-ai/langchain/issues/6380 | https://github.com/langchain-ai/langchain/pull/6381 | b0d80c4b3e128f27bd1b9df48ed4afbe17950fec | 22601b0b6323e6465f78ca9bc16152062a2b65ba | "2023-06-18T19:19:04Z" | python | "2023-06-20T05:48:35Z" | tests/integration_tests/chains/test_graph_database.py | """Test Graph Database Chain."""
import os
from langchain.chains.graph_qa.cypher import GraphCypherQAChain
from langchain.chains.loading import load_chain
from langchain.graphs import Neo4jGraph
from langchain.llms.openai import OpenAI
def test_connect_neo4j() -> None:
"""Test that Neo4j database is correctly instantiated and connected."""
url = os.environ.get("NEO4J_URL")
username = os.environ.get("NEO4J_USERNAME")
password = os.environ.get("NEO4J_PASSWORD")
assert url is not None
assert username is not None
assert password is not None
graph = Neo4jGraph(
url=url,
username=username,
password=password,
)
output = graph.query(
"""
RETURN "test" AS output
"""
)
expected_output = [{"output": "test"}]
assert output == expected_output
def test_cypher_generating_run() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,380 | Neo4J schema not inferred correctly by Neo4JGraph Object | (duplicate issue body removed - identical to the first 6,380 record above)
``` | https://github.com/langchain-ai/langchain/issues/6380 | https://github.com/langchain-ai/langchain/pull/6381 | b0d80c4b3e128f27bd1b9df48ed4afbe17950fec | 22601b0b6323e6465f78ca9bc16152062a2b65ba | "2023-06-18T19:19:04Z" | python | "2023-06-20T05:48:35Z" | tests/integration_tests/chains/test_graph_database.py | """Test that Cypher statement is correctly generated and executed."""
url = os.environ.get("NEO4J_URL")
username = os.environ.get("NEO4J_USERNAME")
password = os.environ.get("NEO4J_PASSWORD")
assert url is not None
assert username is not None
assert password is not None
graph = Neo4jGraph(
url=url,
username=username,
password=password,
)
graph.query("MATCH (n) DETACH DELETE n")
graph.query(
"CREATE (a:Actor {name:'Bruce Willis'})"
"-[:ACTED_IN]->(:Movie {title: 'Pulp Fiction'})"
)
graph.refresh_schema()
chain = GraphCypherQAChain.from_llm(OpenAI(temperature=0), graph=graph)
output = chain.run("Who played in Pulp Fiction?")
expected_output = " Bruce Willis played in Pulp Fiction."
assert output == expected_output
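These integration tests read connection details from the environment and call a live LLM through `OpenAI(temperature=0)`. A local setup sketch (every value below is a placeholder):
```
import os

# Placeholder environment for running this file's tests locally.
os.environ.setdefault("NEO4J_URL", "bolt://localhost:7687")
os.environ.setdefault("NEO4J_USERNAME", "neo4j")
os.environ.setdefault("NEO4J_PASSWORD", "password")
os.environ.setdefault("OPENAI_API_KEY", "sk-...")  # deliberately elided
```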
def test_cypher_top_k() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,380 | Neo4J schema not inferred correctly by Neo4JGraph Object | (duplicate issue body removed - identical to the first 6,380 record above)
``` | https://github.com/langchain-ai/langchain/issues/6380 | https://github.com/langchain-ai/langchain/pull/6381 | b0d80c4b3e128f27bd1b9df48ed4afbe17950fec | 22601b0b6323e6465f78ca9bc16152062a2b65ba | "2023-06-18T19:19:04Z" | python | "2023-06-20T05:48:35Z" | tests/integration_tests/chains/test_graph_database.py | """Test top_k parameter correctly limits the number of results in the context."""
url = os.environ.get("NEO4J_URL")
username = os.environ.get("NEO4J_USERNAME")
password = os.environ.get("NEO4J_PASSWORD")
assert url is not None
assert username is not None
assert password is not None
TOP_K = 1
graph = Neo4jGraph(
url=url,
username=username,
password=password,
)
graph.query("MATCH (n) DETACH DELETE n")
graph.query(
"CREATE (a:Actor {name:'Bruce Willis'})"
"-[:ACTED_IN]->(:Movie {title: 'Pulp Fiction'})"
"<-[:ACTED_IN]-(:Actor {name:'Foo'})"
)
graph.refresh_schema()
chain = GraphCypherQAChain.from_llm(
OpenAI(temperature=0), graph=graph, return_direct=True, top_k=TOP_K
)
output = chain.run("Who played in Pulp Fiction?")
assert len(output) == TOP_K
def test_cypher_intermediate_steps() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,380 | Neo4J schema not inferred correctly by Neo4JGraph Object | (duplicate issue body removed - identical to the first 6,380 record above)
``` | https://github.com/langchain-ai/langchain/issues/6380 | https://github.com/langchain-ai/langchain/pull/6381 | b0d80c4b3e128f27bd1b9df48ed4afbe17950fec | 22601b0b6323e6465f78ca9bc16152062a2b65ba | "2023-06-18T19:19:04Z" | python | "2023-06-20T05:48:35Z" | tests/integration_tests/chains/test_graph_database.py | """Test the returning of the intermediate steps."""
url = os.environ.get("NEO4J_URL")
username = os.environ.get("NEO4J_USERNAME")
password = os.environ.get("NEO4J_PASSWORD")
assert url is not None
assert username is not None
assert password is not None |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,380 | Neo4J schema not inferred correctly by Neo4JGraph Object | (duplicate issue body removed - identical to the first 6,380 record above)
``` | https://github.com/langchain-ai/langchain/issues/6380 | https://github.com/langchain-ai/langchain/pull/6381 | b0d80c4b3e128f27bd1b9df48ed4afbe17950fec | 22601b0b6323e6465f78ca9bc16152062a2b65ba | "2023-06-18T19:19:04Z" | python | "2023-06-20T05:48:35Z" | tests/integration_tests/chains/test_graph_database.py | graph = Neo4jGraph(
url=url,
username=username,
password=password,
)
graph.query("MATCH (n) DETACH DELETE n")
graph.query(
"CREATE (a:Actor {name:'Bruce Willis'})"
"-[:ACTED_IN]->(:Movie {title: 'Pulp Fiction'})"
)
graph.refresh_schema()
chain = GraphCypherQAChain.from_llm(
OpenAI(temperature=0), graph=graph, return_intermediate_steps=True
)
output = chain("Who played in Pulp Fiction?")
expected_output = " Bruce Willis played in Pulp Fiction."
assert output["result"] == expected_output
query = output["intermediate_steps"][0]["query"]
expected_query = (
"\n\nMATCH (a:Actor)-[:ACTED_IN]->"
"(m:Movie {title: 'Pulp Fiction'}) RETURN a.name"
)
assert query == expected_query
context = output["intermediate_steps"][1]["context"]
expected_context = [{"a.name": "Bruce Willis"}]
assert context == expected_context
def test_cypher_return_direct() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,380 | Neo4J schema not inferred correctly by Neo4JGraph Object | (duplicate issue body removed - identical to the first 6,380 record above)
``` | https://github.com/langchain-ai/langchain/issues/6380 | https://github.com/langchain-ai/langchain/pull/6381 | b0d80c4b3e128f27bd1b9df48ed4afbe17950fec | 22601b0b6323e6465f78ca9bc16152062a2b65ba | "2023-06-18T19:19:04Z" | python | "2023-06-20T05:48:35Z" | tests/integration_tests/chains/test_graph_database.py | """Test that chain returns direct results."""
url = os.environ.get("NEO4J_URL")
username = os.environ.get("NEO4J_USERNAME")
password = os.environ.get("NEO4J_PASSWORD")
assert url is not None
assert username is not None
assert password is not None
graph = Neo4jGraph(
url=url,
username=username,
password=password,
)
graph.query("MATCH (n) DETACH DELETE n")
graph.query(
"CREATE (a:Actor {name:'Bruce Willis'})"
"-[:ACTED_IN]->(:Movie {title: 'Pulp Fiction'})"
)
graph.refresh_schema()
chain = GraphCypherQAChain.from_llm(
OpenAI(temperature=0), graph=graph, return_direct=True
)
output = chain.run("Who played in Pulp Fiction?")
expected_output = [{"a.name": "Bruce Willis"}]
assert output == expected_output
def test_cypher_save_load() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,380 | Neo4J schema not inferred correctly by Neo4JGraph Object | (duplicate issue body removed - identical to the first 6,380 record above)
``` | https://github.com/langchain-ai/langchain/issues/6380 | https://github.com/langchain-ai/langchain/pull/6381 | b0d80c4b3e128f27bd1b9df48ed4afbe17950fec | 22601b0b6323e6465f78ca9bc16152062a2b65ba | "2023-06-18T19:19:04Z" | python | "2023-06-20T05:48:35Z" | tests/integration_tests/chains/test_graph_database.py | """Test saving and loading."""
FILE_PATH = "cypher.yaml"
url = os.environ.get("NEO4J_URL")
username = os.environ.get("NEO4J_USERNAME")
password = os.environ.get("NEO4J_PASSWORD")
assert url is not None
assert username is not None
assert password is not None
graph = Neo4jGraph(
url=url,
username=username,
password=password,
)
chain = GraphCypherQAChain.from_llm(
OpenAI(temperature=0), graph=graph, return_direct=True
)
chain.save(file_path=FILE_PATH)
qa_loaded = load_chain(FILE_PATH, graph=graph)
assert qa_loaded == chain |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,234 | Gmail toolkit cannot handle sending email to one person correctly | ### System Info
The Gmail toolkit cannot handle sending email to one person correctly - if I want to send an email to a single recipient, the agent doesn't realize that `action_input` should look like:
```
{
...
to: ["[email protected]"]
...
}
```
Instead it looks like:
```
{
...
to: "[email protected]"
...
}
```
It causes an error with the `To` header - the bare string is iterated character by character, so the Gmail API receives a list of single letters (["e", "m", ...]).
Error:
```
<HttpError 400 when requesting https://gmail.googleapis.com/gmail/v1/users/me/messages/send?alt=json returned "Invalid To header". Details: "[{'message': 'Invalid To header', 'domain': 'global', 'reason': 'invalidArgument'}]">
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Ask the agent to send an email to one person using the GmailToolkit tools.
### Expected behavior
To always use list of emails in `To` header. | https://github.com/langchain-ai/langchain/issues/6234 | https://github.com/langchain-ai/langchain/pull/6242 | 94c789925798053c08ad8cc262b23f2683abd4d2 | 5d149e4d50325d2821263e59bac667f781c48f7a | "2023-06-15T15:30:50Z" | python | "2023-06-21T08:25:49Z" | langchain/tools/gmail/send_message.py | """Send Gmail messages."""
import base64
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Field
from langchain.callbacks.manager import (
AsyncCallbackManagerForToolRun,
CallbackManagerForToolRun,
)
from langchain.tools.gmail.base import GmailBaseTool
class SendMessageSchema(BaseModel): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,234 | Gmail toolkit cannot handle sending email to one person correctly | (duplicate issue body removed - identical to the first 6,234 record above)
To always use list of emails in `To` header. | https://github.com/langchain-ai/langchain/issues/6234 | https://github.com/langchain-ai/langchain/pull/6242 | 94c789925798053c08ad8cc262b23f2683abd4d2 | 5d149e4d50325d2821263e59bac667f781c48f7a | "2023-06-15T15:30:50Z" | python | "2023-06-21T08:25:49Z" | langchain/tools/gmail/send_message.py | message: str = Field(
...,
description="The message to send.",
)
to: List[str] = Field(
...,
description="The list of recipients.",
)
subject: str = Field(
...,
description="The subject of the message.",
)
cc: Optional[List[str]] = Field(
None,
description="The list of CC recipients.",
)
bcc: Optional[List[str]] = Field(
None,
description="The list of BCC recipients.",
)
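The schema types `to`, `cc`, and `bcc` as lists, but the agent frequently emits a bare string for a single recipient - exactly the shape that triggers the bug. One defensive option is a pre-validator that coerces a string into a one-element list (an illustrative sketch, not necessarily how PR 6242 resolved it):
```
from typing import Any
from pydantic import validator

class CoercingSendMessageSchema(SendMessageSchema):
    """SendMessageSchema variant that tolerates a single address as a string."""

    @validator("to", "cc", "bcc", pre=True)
    def _coerce_to_list(cls, v: Any) -> Any:
        # Wrap "person@example.com" in a list; pass real lists (and None) through.
        if isinstance(v, str):
            return [v]
        return v
```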
class GmailSendMessage(GmailBaseTool): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,234 | Gmail toolkit cannot handle sending email to one person correctly | (duplicate issue body removed - identical to the first 6,234 record above)
To always use list of emails in `To` header. | https://github.com/langchain-ai/langchain/issues/6234 | https://github.com/langchain-ai/langchain/pull/6242 | 94c789925798053c08ad8cc262b23f2683abd4d2 | 5d149e4d50325d2821263e59bac667f781c48f7a | "2023-06-15T15:30:50Z" | python | "2023-06-21T08:25:49Z" | langchain/tools/gmail/send_message.py | name: str = "send_gmail_message"
description: str = (
"Use this tool to send email messages." " The input is the message, recipents"
)
def _prepare_message(
self,
message: str,
to: List[str],
subject: str,
cc: Optional[List[str]] = None,
bcc: Optional[List[str]] = None,
) -> Dict[str, Any]:
"""Create a message for an email."""
mime_message = MIMEMultipart()
mime_message.attach(MIMEText(message, "html"))
mime_message["To"] = ", ".join(to)
mime_message["Subject"] = subject
if cc is not None:
mime_message["Cc"] = ", ".join(cc)
if bcc is not None:
mime_message["Bcc"] = ", ".join(bcc)
encoded_message = base64.urlsafe_b64encode(mime_message.as_bytes()).decode()
return {"raw": encoded_message}
def _run( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,234 | Gmail toolkit cannot handle sending email to one person correctly | ### System Info
Gmail toolkit cannot handle sending email to one person correctly - if I want to send an email to one person, it doesn't account for the fact that `action_input` should look like:
```
{
...
to: ["[email protected]"]
...
}
```
Instead it looks like:
```
{
...
to: "[email protected]"
...
}
```
It causes an error with the `To` header - the tool passes a list of individual letters to the Gmail API - ["e", "m", ...].
Error:
```
<HttpError 400 when requesting https://gmail.googleapis.com/gmail/v1/users/me/messages/send?alt=json returned "Invalid To header". Details: "[{'message': 'Invalid To header', 'domain': 'global', 'reason': 'invalidArgument'}]">
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Ask the agent to send an email to a person using the GmailToolkit tools.
### Expected behavior
To always use list of emails in `To` header. | https://github.com/langchain-ai/langchain/issues/6234 | https://github.com/langchain-ai/langchain/pull/6242 | 94c789925798053c08ad8cc262b23f2683abd4d2 | 5d149e4d50325d2821263e59bac667f781c48f7a | "2023-06-15T15:30:50Z" | python | "2023-06-21T08:25:49Z" | langchain/tools/gmail/send_message.py | self,
message: str,
to: List[str],
subject: str,
cc: Optional[List[str]] = None,
bcc: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
"""Run the tool."""
try:
create_message = self._prepare_message(message, to, subject, cc=cc, bcc=bcc)
send_message = (
self.api_resource.users()
.messages()
.send(userId="me", body=create_message)
)
sent_message = send_message.execute()
return f'Message sent. Message Id: {sent_message["id"]}'
except Exception as error:
raise Exception(f"An error occurred: {error}")
async def _arun( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,234 | Gmail toolkit cannot handle sending email to one person correctly | ### System Info
Gmail toolkit cannot handle sending email to one person correctly - if I want to send an email to one person, it doesn't account for the fact that `action_input` should look like:
```
{
...
to: ["[email protected]"]
...
}
```
Instead it looks like:
```
{
...
to: "[email protected]"
...
}
```
It causes an error with the `To` header - the tool passes a list of individual letters to the Gmail API - ["e", "m", ...].
Error:
```
<HttpError 400 when requesting https://gmail.googleapis.com/gmail/v1/users/me/messages/send?alt=json returned "Invalid To header". Details: "[{'message': 'Invalid To header', 'domain': 'global', 'reason': 'invalidArgument'}]">
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Ask the agent to send an email to a person using the GmailToolkit tools.
### Expected behavior
To always use list of emails in `To` header. | https://github.com/langchain-ai/langchain/issues/6234 | https://github.com/langchain-ai/langchain/pull/6242 | 94c789925798053c08ad8cc262b23f2683abd4d2 | 5d149e4d50325d2821263e59bac667f781c48f7a | "2023-06-15T15:30:50Z" | python | "2023-06-21T08:25:49Z" | langchain/tools/gmail/send_message.py | self,
message: str,
to: List[str],
subject: str,
cc: Optional[List[str]] = None,
bcc: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
) -> str:
"""Run the tool asynchronously."""
raise NotImplementedError(f"The tool {self.name} does not support async yet.") |
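The chunk above joins recipients with `", ".join(to)`, which is exactly what breaks when the agent supplies a bare string. A defensive fix in line with issue 6234 - a sketch under the assumption that recipients may arrive as either a single address or a list, not necessarily the change merged in the linked PR - is to normalize before building the header:
```python
from typing import List, Union


def _normalize_recipients(to: Union[str, List[str], None]) -> List[str]:
    """Coerce a bare address string into a one-element list.

    Prevents `", ".join(to)` from iterating over the characters of a
    string, which is what triggers the "Invalid To header" error above.
    """
    if to is None:
        return []
    if isinstance(to, str):
        return [to]
    return list(to)


# Sketch of usage inside _prepare_message:
# mime_message["To"] = ", ".join(_normalize_recipients(to))
```
The helper name `_normalize_recipients` is an assumption for illustration; the same coercion would apply to `cc` and `bcc`.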
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Wrapper around OpenAI APIs."""
from __future__ import annotations
import logging
import sys
import warnings
from typing import (
AbstractSet,
Any,
Callable,
Collection,
Dict,
Generator,
List,
Literal,
Mapping,
Optional,
Set,
Tuple,
Union,
)
from pydantic import Field, root_validator
from tenacity import (
before_sleep_log,
retry, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | retry_if_exception_type,
stop_after_attempt,
wait_exponential,
)
from langchain.callbacks.manager import (
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from langchain.llms.base import BaseLLM
from langchain.schema import Generation, LLMResult
from langchain.utils import get_from_dict_or_env
logger = logging.getLogger(__name__)
def update_token_usage(
keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]
) -> None:
"""Update token usage."""
_keys_to_use = keys.intersection(response["usage"])
for _key in _keys_to_use:
if _key not in token_usage:
token_usage[_key] = response["usage"][_key]
else:
token_usage[_key] += response["usage"][_key]
def _update_response(response: Dict[str, Any], stream_response: Dict[str, Any]) -> None:
"""Update response from the stream response."""
response["choices"][0]["text"] += stream_response["choices"][0]["text"]
response["choices"][0]["finish_reason"] = stream_response["choices"][0][
"finish_reason"
]
response["choices"][0]["logprobs"] = stream_response["choices"][0]["logprobs"]
def _streaming_response_template() -> Dict[str, Any]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | return {
"choices": [
{
"text": "",
"finish_reason": None,
"logprobs": None,
}
]
}
def _create_retry_decorator(llm: Union[BaseOpenAI, OpenAIChat]) -> Callable[[Any], Any]:
import openai
min_seconds = 4
max_seconds = 10
return retry(
reraise=True,
stop=stop_after_attempt(llm.max_retries),
wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
retry=(
retry_if_exception_type(openai.error.Timeout)
| retry_if_exception_type(openai.error.APIError)
| retry_if_exception_type(openai.error.APIConnectionError)
| retry_if_exception_type(openai.error.RateLimitError)
| retry_if_exception_type(openai.error.ServiceUnavailableError)
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
def completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) -> Any: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Use tenacity to retry the completion call."""
retry_decorator = _create_retry_decorator(llm)
@retry_decorator
def _completion_with_retry(**kwargs: Any) -> Any:
return llm.client.create(**kwargs)
return _completion_with_retry(**kwargs)
async def acompletion_with_retry(
llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any
) -> Any:
"""Use tenacity to retry the async completion call."""
retry_decorator = _create_retry_decorator(llm)
@retry_decorator
async def _completion_with_retry(**kwargs: Any) -> Any:
return await llm.client.acreate(**kwargs)
return await _completion_with_retry(**kwargs)
class BaseOpenAI(BaseLLM): |
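The two helpers above wrap OpenAI calls with tenacity so transient failures are retried with exponential backoff. A minimal, self-contained sketch of the same pattern (a fixed retry budget and a generic `TimeoutError` stand in for `llm.max_retries` and the openai error types):
```python
import logging

from tenacity import (
    before_sleep_log,
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)

logger = logging.getLogger(__name__)


@retry(
    reraise=True,
    stop=stop_after_attempt(6),
    wait=wait_exponential(multiplier=1, min=4, max=10),
    retry=retry_if_exception_type(TimeoutError),
    before_sleep=before_sleep_log(logger, logging.WARNING),
)
def flaky_call() -> str:
    # Stand-in for llm.client.create(**kwargs); a real call would raise
    # TimeoutError (or an openai error type) intermittently and be retried
    # with exponential backoff between 4 and 10 seconds.
    return "ok"
```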
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Wrapper around OpenAI large language models."""
@property
def lc_secrets(self) -> Dict[str, str]:
return {"openai_api_key": "OPENAI_API_KEY"}
@property
def lc_serializable(self) -> bool:
return True
client: Any
model_name: str = Field("text-davinci-003", alias="model")
"""Model name to use."""
temperature: float = 0.7
"""What sampling temperature to use."""
max_tokens: int = 256
"""The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the models maximal context size."""
top_p: float = 1
"""Total probability mass of tokens to consider at each step."""
frequency_penalty: float = 0
"""Penalizes repeated tokens according to frequency."""
presence_penalty: float = 0
"""Penalizes repeated tokens."""
n: int = 1
"""How many completions to generate for each prompt."""
best_of: int = 1
"""Generates best_of completions server-side and returns the "best"."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified."""
openai_api_key: Optional[str] = None |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | openai_api_base: Optional[str] = None
openai_organization: Optional[str] = None
openai_proxy: Optional[str] = None
batch_size: int = 20
"""Batch size to use when passing multiple documents to generate."""
request_timeout: Optional[Union[float, Tuple[float, float]]] = None
"""Timeout for requests to OpenAI completion API. Default is 600 seconds."""
logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict)
"""Adjust the probability of specific tokens being generated."""
max_retries: int = 6
"""Maximum number of retries to make when generating."""
streaming: bool = False
"""Whether to stream the results or not."""
allowed_special: Union[Literal["all"], AbstractSet[str]] = set()
"""Set of special tokens that are allowed。"""
disallowed_special: Union[Literal["all"], Collection[str]] = "all"
"""Set of special tokens that are not allowed。"""
    def __new__(cls, **data: Any) -> Union[OpenAIChat, BaseOpenAI]:  # type: ignore
"""Initialize the OpenAI object."""
model_name = data.get("model_name", "")
if model_name.startswith("gpt-3.5-turbo") or model_name.startswith("gpt-4"):
warnings.warn(
"You are trying to use a chat model. This way of initializing it is "
"no longer supported. Instead, please use: "
"`from langchain.chat_models import ChatOpenAI`"
)
return OpenAIChat(**data)
return super().__new__(cls)
class Config: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Configuration for this pydantic object."""
allow_population_by_field_name = True
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = cls.all_required_field_names()
extra = values.get("model_kwargs", {})
for field_name in list(values):
if field_name in extra:
raise ValueError(f"Found {field_name} supplied twice.")
if field_name not in all_required_field_names:
logger.warning(
f"""WARNING! {field_name} is not default parameter.
{field_name} was transferred to model_kwargs.
Please confirm that {field_name} is what you intended."""
)
extra[field_name] = values.pop(field_name)
invalid_model_kwargs = all_required_field_names.intersection(extra.keys())
if invalid_model_kwargs:
raise ValueError(
f"Parameters {invalid_model_kwargs} should be specified explicitly. "
f"Instead they were passed in as part of `model_kwargs` parameter."
)
values["model_kwargs"] = extra
return values
@root_validator()
def validate_environment(cls, values: Dict) -> Dict: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Validate that api key and python package exists in environment."""
values["openai_api_key"] = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
values["openai_api_base"] = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
default="",
)
values["openai_proxy"] = get_from_dict_or_env(
values,
"openai_proxy",
"OPENAI_PROXY",
default="",
)
values["openai_organization"] = get_from_dict_or_env(
values,
"openai_organization",
"OPENAI_ORGANIZATION",
default="",
)
try:
import openai |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | values["client"] = openai.Completion
except ImportError:
raise ImportError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
if values["streaming"] and values["n"] > 1:
raise ValueError("Cannot stream results when n > 1.")
if values["streaming"] and values["best_of"] > 1:
raise ValueError("Cannot stream results when best_of > 1.")
return values
@property
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling OpenAI API."""
normal_params = {
"temperature": self.temperature,
"max_tokens": self.max_tokens,
"top_p": self.top_p,
"frequency_penalty": self.frequency_penalty,
"presence_penalty": self.presence_penalty,
"n": self.n,
"request_timeout": self.request_timeout,
"logit_bias": self.logit_bias,
}
        # Azure gpt-35-turbo doesn't support best_of
        # don't specify best_of if it is 1
if self.best_of > 1:
normal_params["best_of"] = self.best_of
return {**normal_params, **self.model_kwargs}
def _generate( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> LLMResult:
"""Call out to OpenAI's endpoint with k unique prompts.
Args:
prompts: The prompts to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
The full LLM output.
Example:
.. code-block:: python
response = openai.generate(["Tell me a joke."])
"""
        # TODO: write a unit test for this
params = self._invocation_params
params = {**params, **kwargs}
sub_prompts = self.get_sub_prompts(params, prompts, stop)
choices = []
token_usage: Dict[str, int] = {}
        # Get the token usage from the response.
        # Includes prompt, completion, and total tokens used.
_keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
for _prompts in sub_prompts:
if self.streaming:
if len(_prompts) > 1:
raise ValueError("Cannot stream results with multiple prompts.")
params["stream"] = True |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | response = _streaming_response_template()
for stream_resp in completion_with_retry(
self, prompt=_prompts, **params
):
if run_manager:
run_manager.on_llm_new_token(
stream_resp["choices"][0]["text"],
verbose=self.verbose,
logprobs=stream_resp["choices"][0]["logprobs"],
)
_update_response(response, stream_resp)
choices.extend(response["choices"])
else:
response = completion_with_retry(self, prompt=_prompts, **params)
choices.extend(response["choices"])
if not self.streaming:
            # Can't update token usage if streaming
update_token_usage(_keys, response, token_usage)
return self.create_llm_result(choices, prompts, token_usage)
async def _agenerate(
self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> LLMResult:
"""Call out to OpenAI's endpoint async with k unique prompts."""
params = self._invocation_params
params = {**params, **kwargs}
sub_prompts = self.get_sub_prompts(params, prompts, stop) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | choices = []
token_usage: Dict[str, int] = {}
        # Get the token usage from the response.
        # Includes prompt, completion, and total tokens used.
_keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
for _prompts in sub_prompts:
if self.streaming:
if len(_prompts) > 1:
raise ValueError("Cannot stream results with multiple prompts.")
params["stream"] = True
response = _streaming_response_template()
async for stream_resp in await acompletion_with_retry(
self, prompt=_prompts, **params
):
if run_manager:
await run_manager.on_llm_new_token(
stream_resp["choices"][0]["text"],
verbose=self.verbose,
logprobs=stream_resp["choices"][0]["logprobs"],
)
_update_response(response, stream_resp)
choices.extend(response["choices"])
else:
response = await acompletion_with_retry(self, prompt=_prompts, **params)
choices.extend(response["choices"])
if not self.streaming:
            # Can't update token usage if streaming
update_token_usage(_keys, response, token_usage)
return self.create_llm_result(choices, prompts, token_usage)
def get_sub_prompts( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | self,
params: Dict[str, Any],
prompts: List[str],
stop: Optional[List[str]] = None,
) -> List[List[str]]:
"""Get the sub prompts for llm call."""
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
params["stop"] = stop
if params["max_tokens"] == -1:
if len(prompts) != 1:
raise ValueError(
"max_tokens set to -1 not supported for multiple inputs."
)
params["max_tokens"] = self.max_tokens_for_prompt(prompts[0])
sub_prompts = [
prompts[i : i + self.batch_size]
for i in range(0, len(prompts), self.batch_size)
]
return sub_prompts
def create_llm_result( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | self, choices: Any, prompts: List[str], token_usage: Dict[str, int]
) -> LLMResult:
"""Create the LLMResult from the choices and prompts."""
generations = []
for i, _ in enumerate(prompts):
sub_choices = choices[i * self.n : (i + 1) * self.n]
generations.append(
[
Generation(
text=choice["text"],
generation_info=dict(
finish_reason=choice.get("finish_reason"),
logprobs=choice.get("logprobs"),
),
)
for choice in sub_choices
]
)
llm_output = {"token_usage": token_usage, "model_name": self.model_name}
return LLMResult(generations=generations, llm_output=llm_output)
def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Args:
prompt: The prompts to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
A generator representing the stream of tokens from OpenAI.
Example:
.. code-block:: python
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
"""
params = self.prep_streaming_params(stop)
generator = self.client.create(prompt=prompt, **params)
return generator
def prep_streaming_params(self, stop: Optional[List[str]] = None) -> Dict[str, Any]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Prepare the params for streaming."""
params = self._invocation_params
if "best_of" in params and params["best_of"] != 1:
raise ValueError("OpenAI only supports best_of == 1 for streaming")
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
params["stop"] = stop
params["stream"] = True
return params
@property
def _invocation_params(self) -> Dict[str, Any]:
"""Get the parameters used to invoke the model."""
openai_creds: Dict[str, Any] = {
"api_key": self.openai_api_key,
"api_base": self.openai_api_base,
"organization": self.openai_organization,
}
if self.openai_proxy:
import openai
            openai.proxy = {"http": self.openai_proxy, "https": self.openai_proxy}  # type: ignore[assignment]
return {**openai_creds, **self._default_params}
@property
def _identifying_params(self) -> Mapping[str, Any]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Get the identifying parameters."""
return {**{"model_name": self.model_name}, **self._default_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "openai"
def get_token_ids(self, text: str) -> List[int]:
"""Get the token IDs using the tiktoken package."""
        # tiktoken NOT supported for Python < 3.8
if sys.version_info[1] < 8:
            return super().get_token_ids(text)
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to calculate get_num_tokens. "
"Please install it with `pip install tiktoken`."
)
enc = tiktoken.encoding_for_model(self.model_name)
return enc.encode(
text,
allowed_special=self.allowed_special,
disallowed_special=self.disallowed_special,
)
@staticmethod
def modelname_to_contextsize(modelname: str) -> int: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Calculate the maximum number of tokens possible to generate for a model.
Args:
modelname: The modelname we want to know the context size for.
Returns:
The maximum context size
Example:
.. code-block:: python
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
"""
model_token_mapping = {
"gpt-4": 8192,
"gpt-4-0314": 8192,
"gpt-4-32k": 32768,
"gpt-4-32k-0314": 32768,
"gpt-3.5-turbo": 4096,
"gpt-3.5-turbo-0301": 4096,
"text-ada-001": 2049,
"ada": 2049,
"text-babbage-001": 2040,
"babbage": 2049,
"text-curie-001": 2049,
"curie": 2049,
"davinci": 2049,
"text-davinci-003": 4097,
"text-davinci-002": 4097,
"code-davinci-002": 8001,
"code-davinci-001": 8001, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | "code-cushman-002": 2048,
"code-cushman-001": 2048,
}
        # handle fine-tuned models
if "ft-" in modelname:
modelname = modelname.split(":")[0]
context_size = model_token_mapping.get(modelname, None)
if context_size is None:
raise ValueError(
f"Unknown model: {modelname}. Please provide a valid OpenAI model name."
"Known models are: " + ", ".join(model_token_mapping.keys())
)
return context_size
@property
def max_context_size(self) -> int:
"""Get max context size for this model."""
return self.modelname_to_contextsize(self.model_name)
def max_tokens_for_prompt(self, prompt: str) -> int:
"""Calculate the maximum number of tokens possible to generate for a prompt.
Args:
prompt: The prompt to pass into the model.
Returns:
The maximum number of tokens to generate for a prompt.
Example:
.. code-block:: python
max_tokens = openai.max_token_for_prompt("Tell me a joke.")
"""
num_tokens = self.get_num_tokens(prompt)
return self.max_context_size - num_tokens
class OpenAI(BaseOpenAI): |
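Per issue 6118 above, `model_token_mapping` lacks the new 16k model. A hedged sketch of the suggested addition (a 16k context window is 16384 tokens; the dated `-0613` snapshot from the same announcement is included as an assumption and may differ from what was actually merged):
```python
# Sketch: the entries issue 6118 asks for in model_token_mapping.
model_token_mapping.update(
    {
        "gpt-3.5-turbo-16k": 16384,
        "gpt-3.5-turbo-16k-0613": 16384,
    }
)
```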
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Wrapper around OpenAI large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
"""
@property
def _invocation_params(self) -> Dict[str, Any]:
return {**{"model": self.model_name}, **super()._invocation_params}
class AzureOpenAI(BaseOpenAI): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Wrapper around Azure-specific OpenAI large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
"""
deployment_name: str = ""
"""Deployment name to use."""
openai_api_type: str = "azure"
openai_api_version: str = ""
@root_validator()
def validate_azure_settings(cls, values: Dict) -> Dict:
values["openai_api_version"] = get_from_dict_or_env(
values,
"openai_api_version",
"OPENAI_API_VERSION",
)
values["openai_api_type"] = get_from_dict_or_env(
values,
"openai_api_type",
"OPENAI_API_TYPE",
)
return values
@property
def _identifying_params(self) -> Mapping[str, Any]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | return {
**{"deployment_name": self.deployment_name},
**super()._identifying_params,
}
@property
def _invocation_params(self) -> Dict[str, Any]:
openai_params = {
"engine": self.deployment_name,
"api_type": self.openai_api_type,
"api_version": self.openai_api_version,
}
return {**openai_params, **super()._invocation_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "azure"
class OpenAIChat(BaseLLM):
"""Wrapper around OpenAI Chat large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.llms import OpenAIChat
openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
"""
client: Any
model_name: str = "gpt-3.5-turbo"
"""Model name to use.""" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified."""
openai_api_key: Optional[str] = None
openai_api_base: Optional[str] = None
openai_proxy: Optional[str] = None
max_retries: int = 6
"""Maximum number of retries to make when generating."""
prefix_messages: List = Field(default_factory=list)
"""Series of messages for Chat input."""
streaming: bool = False
"""Whether to stream the results or not."""
allowed_special: Union[Literal["all"], AbstractSet[str]] = set()
"""Set of special tokens that are allowed。"""
disallowed_special: Union[Literal["all"], Collection[str]] = "all"
"""Set of special tokens that are not allowed。"""
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = {field.alias for field in cls.__fields__.values()}
extra = values.get("model_kwargs", {})
for field_name in list(values):
if field_name not in all_required_field_names:
if field_name in extra:
raise ValueError(f"Found {field_name} supplied twice.")
extra[field_name] = values.pop(field_name)
values["model_kwargs"] = extra
return values
@root_validator()
def validate_environment(cls, values: Dict) -> Dict: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Validate that api key and python package exists in environment."""
openai_api_key = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
openai_api_base = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
default="",
)
openai_proxy = get_from_dict_or_env(
values,
"openai_proxy",
"OPENAI_PROXY",
default="",
)
openai_organization = get_from_dict_or_env(
values, "openai_organization", "OPENAI_ORGANIZATION", default=""
)
try:
import openai
openai.api_key = openai_api_key |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | if openai_api_base:
openai.api_base = openai_api_base
if openai_organization:
openai.organization = openai_organization
if openai_proxy:
                openai.proxy = {"http": openai_proxy, "https": openai_proxy}  # type: ignore[assignment]
except ImportError:
raise ImportError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
try:
values["client"] = openai.ChatCompletion
except AttributeError:
raise ValueError(
"`openai` has no `ChatCompletion` attribute, this is likely "
"due to an old version of the openai package. Try upgrading it "
"with `pip install --upgrade openai`."
)
warnings.warn(
"You are trying to use a chat model. This way of initializing it is "
"no longer supported. Instead, please use: "
"`from langchain.chat_models import ChatOpenAI`"
)
return values
@property
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling OpenAI API."""
return self.model_kwargs
def _get_chat_params( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | self, prompts: List[str], stop: Optional[List[str]] = None
) -> Tuple:
if len(prompts) > 1:
raise ValueError(
f"OpenAIChat currently only supports single prompt, got {prompts}"
)
messages = self.prefix_messages + [{"role": "user", "content": prompts[0]}]
params: Dict[str, Any] = {**{"model": self.model_name}, **self._default_params}
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
params["stop"] = stop
if params.get("max_tokens") == -1:
            # for ChatGPT api, omitting max_tokens is equivalent to having no limit
del params["max_tokens"]
return messages, params
def _generate(
self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> LLMResult:
messages, params = self._get_chat_params(prompts, stop)
params = {**params, **kwargs}
if self.streaming:
response = ""
params["stream"] = True
for stream_resp in completion_with_retry(self, messages=messages, **params):
token = stream_resp["choices"][0]["delta"].get("content", "") |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | response += token
if run_manager:
run_manager.on_llm_new_token(
token,
)
return LLMResult(
generations=[[Generation(text=response)]],
)
else:
full_response = completion_with_retry(self, messages=messages, **params)
llm_output = {
"token_usage": full_response["usage"],
"model_name": self.model_name,
}
return LLMResult(
generations=[
[Generation(text=full_response["choices"][0]["message"]["content"])]
],
llm_output=llm_output,
)
async def _agenerate(
self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> LLMResult:
messages, params = self._get_chat_params(prompts, stop)
params = {**params, **kwargs}
if self.streaming: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | response = ""
params["stream"] = True
async for stream_resp in await acompletion_with_retry(
self, messages=messages, **params
):
token = stream_resp["choices"][0]["delta"].get("content", "")
response += token
if run_manager:
await run_manager.on_llm_new_token(
token,
)
return LLMResult(
generations=[[Generation(text=response)]],
)
else:
full_response = await acompletion_with_retry(
self, messages=messages, **params
)
llm_output = {
"token_usage": full_response["usage"],
"model_name": self.model_name,
}
return LLMResult(
generations=[
[Generation(text=full_response["choices"][0]["message"]["content"])]
],
llm_output=llm_output,
)
@property
def _identifying_params(self) -> Mapping[str, Any]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 6,118 | Issue: Update OpenAI model token mapping to reflect new API update 2023-06-13 | ### Issue you'd like to raise.
The blog post here
https://openai.com/blog/function-calling-and-other-api-updates
specifies
> - new 16k context version of gpt-3.5-turbo (vs the standard 4k version)
The `langchain/llms/openai.py` `model_token_mapping` should be changed to reflect this.
### Suggestion:
Add `gpt-3.5-turbo-16k` property to `model_token_mapping` with value 16k | https://github.com/langchain-ai/langchain/issues/6118 | https://github.com/langchain-ai/langchain/pull/6122 | 5d149e4d50325d2821263e59bac667f781c48f7a | e0f468f6c1f7f07bb3987f0887d53ce9af92bb29 | "2023-06-13T21:22:21Z" | python | "2023-06-21T08:37:16Z" | langchain/llms/openai.py | """Get the identifying parameters."""
return {**{"model_name": self.model_name}, **self._default_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "openai-chat"
def get_token_ids(self, text: str) -> List[int]:
"""Get the token IDs using the tiktoken package."""
        # tiktoken NOT supported for Python < 3.8
if sys.version_info[1] < 8:
return super().get_token_ids(text)
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to calculate get_num_tokens. "
"Please install it with `pip install tiktoken`."
)
enc = tiktoken.encoding_for_model(self.model_name)
return enc.encode(
text,
allowed_special=self.allowed_special,
disallowed_special=self.disallowed_special,
) |
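For reference, the token-counting path above can be exercised standalone; a minimal sketch (assumes `tiktoken` is installed and the prompt is a placeholder):
```python
import tiktoken

# Mirrors what get_token_ids does for a chat model name.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
token_ids = enc.encode("Tell me a joke.")
print(len(token_ids))  # number of tokens the prompt consumes
```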
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,456 | Tools: Inconsistent callbacks/run_manager parameter | ### System Info
MacOS Ventura 13.3.1 (a)
python = "^3.9"
langchain = "0.0.185"
### Who can help?
@agola11 @vowelparrot
### Related Components
- Agents / Agent Executors
- Tools / Toolkits
- Callbacks/Tracing
### Reproduction
I want to use the CallbackManager to save some info within a tool. So, as per the [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) that is used to create the tool schema, I define the function as:
```python
def get_list_of_products(
self, profile_description: str, run_manager: CallbackManagerForToolRun
):
```
Nonetheless, once the tool is run, the [expected parameter](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L493) in the function's signature is `callbacks`,
```python
new_argument_supported = signature(self.func).parameters.get("callbacks")
```
So the tool can't run, with the error being:
```bash
TypeError: get_list_of_products() missing 1 required positional argument: 'run_manager'
```
This behavior applies to both StructuredTool and Tool.
### Expected behavior
Either the expected function parameter is set to `run_manager` to replicate the behavior of the [`run` function](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L256) from the `BaseTool` or a different function is used instead of [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) to create a tool's schema expecting the `callbacks` parameter. | https://github.com/langchain-ai/langchain/issues/5456 | https://github.com/langchain-ai/langchain/pull/6483 | b4fe7f3a0995cc6a0111a7e71347eddf2d61f132 | 980c8651743b653f994ad6b97a27b0fa31ee92b4 | "2023-05-30T17:09:02Z" | python | "2023-06-23T08:48:27Z" | langchain/tools/base.py | """Base implementation for tools or skills."""
from __future__ import annotations
import warnings
from abc import ABC, abstractmethod
from inspect import signature
from typing import Any, Awaitable, Callable, Dict, Optional, Tuple, Type, Union
from pydantic import (
BaseModel,
Extra,
Field,
create_model,
root_validator,
validate_arguments,
)
from pydantic.main import ModelMetaclass
from langchain.callbacks.base import BaseCallbackManager
from langchain.callbacks.manager import (
AsyncCallbackManager,
AsyncCallbackManagerForToolRun,
CallbackManager,
CallbackManagerForToolRun,
Callbacks,
)
class SchemaAnnotationError(TypeError):
"""Raised when 'args_schema' is missing or has an incorrect type annotation."""
class ToolMetaclass(ModelMetaclass): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,456 | Tools: Inconsistent callbacks/run_manager parameter | ### System Info
MacOS Ventura 13.3.1 (a)
python = "^3.9"
langchain = "0.0.185"
### Who can help?
@agola11 @vowelparrot
### Related Components
- Agents / Agent Executors
- Tools / Toolkits
- Callbacks/Tracing
### Reproduction
I want to use the CallbackManager to save some info within a tool. So, as per the [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) that is used to create the tool schema, I define the function as:
```python
def get_list_of_products(
self, profile_description: str, run_manager: CallbackManagerForToolRun
):
```
Nonetheless, once the tool is run, the [expected parameter](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L493) in the function's signature is `callbacks`,
```python
new_argument_supported = signature(self.func).parameters.get("callbacks")
```
So the tool can't run, with the error being:
```bash
TypeError: get_list_of_products() missing 1 required positional argument: 'run_manager'
```
This behavior applies to both StructuredTool and Tool.
### Expected behavior
Either the expected function parameter is set to `run_manager` to replicate the behavior of the [`run` function](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L256) from the `BaseTool` or a different function is used instead of [`create_schema_from_function`](https://github.com/hwchase17/langchain/blob/64b4165c8d9b8374295d4629ef57d4d58e9af7c8/langchain/tools/base.py#L99) to create a tool's schema expecting the `callbacks` parameter. | https://github.com/langchain-ai/langchain/issues/5456 | https://github.com/langchain-ai/langchain/pull/6483 | b4fe7f3a0995cc6a0111a7e71347eddf2d61f132 | 980c8651743b653f994ad6b97a27b0fa31ee92b4 | "2023-05-30T17:09:02Z" | python | "2023-06-23T08:48:27Z" | langchain/tools/base.py | """Metaclass for BaseTool to ensure the provided args_schema
    isn't silently ignored."""
def __new__(
cls: Type[ToolMetaclass], name: str, bases: Tuple[Type, ...], dct: dict
) -> ToolMetaclass:
"""Create the definition of the new tool class."""
schema_type: Optional[Type[BaseModel]] = dct.get("args_schema")
if schema_type is not None:
schema_annotations = dct.get("__annotations__", {})
args_schema_type = schema_annotations.get("args_schema", None)
if args_schema_type is None or args_schema_type == BaseModel:
typehint_mandate = """
class ChildTool(BaseTool):
...
args_schema: Type[BaseModel] = SchemaClass
..."""
raise SchemaAnnotationError(
f"Tool definition for {name} must include valid type annotations"
f" for argument 'args_schema' to behave as expected.\n"
f"Expected annotation of 'Type[BaseModel]'"
f" but got '{args_schema_type}'.\n"
f"Expected class looks like:\n"
f"{typehint_mandate}"
)
return super().__new__(cls, name, bases, dct)
def _create_subset_model(
    name: str, model: BaseModel, field_names: list
) -> Type[BaseModel]:
"""Create a pydantic model with only a subset of model's fields."""
fields = {}
for field_name in field_names:
field = model.__fields__[field_name]
fields[field_name] = (field.type_, field.field_info)
return create_model(name, **fields)
def _get_filtered_args(
inferred_model: Type[BaseModel],
func: Callable,
) -> dict:
"""Get the arguments from a function's signature."""
schema = inferred_model.schema()["properties"]
valid_keys = signature(func).parameters
return {k: schema[k] for k in valid_keys if k != "run_manager"}
class _SchemaConfig:
"""Configuration for the pydantic model."""
extra = Extra.forbid
arbitrary_types_allowed = True
def create_schema_from_function(
    model_name: str,
func: Callable,
) -> Type[BaseModel]:
"""Create a pydantic schema from a function's signature.
Args:
model_name: Name to assign to the generated pydandic schema
func: Function to generate the schema from
Returns:
A pydantic model with the same arguments as the function
"""
validated = validate_arguments(func, config=_SchemaConfig)
inferred_model = validated.model
if "run_manager" in inferred_model.__fields__:
del inferred_model.__fields__["run_manager"]
valid_properties = _get_filtered_args(inferred_model, func)
return _create_subset_model(
f"{model_name}Schema", inferred_model, list(valid_properties)
)
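# Editorial note (not part of the original file): a minimal sketch of what
# create_schema_from_function infers. The `multiply` function below is a
# hypothetical example, not upstream code. Note that a `run_manager`
# parameter is deliberately dropped from the inferred schema:
#
#     def multiply(a: int, b: int, run_manager=None) -> int:
#         """Multiply two numbers."""
#         return a * b
#
#     schema = create_schema_from_function("multiply", multiply)
#     assert set(schema.schema()["properties"]) == {"a", "b"}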
class ToolException(Exception):
"""An optional exception that tool throws when execution error occurs.
When this exception is thrown, the agent will not stop working,
but will handle the exception according to the handle_tool_error
variable of the tool, and the processing result will be returned
to the agent as observation, and printed in red on the console.
"""
pass
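# Editorial note (not part of the original file): a hedged sketch of how
# ToolException interacts with `handle_tool_error` on the tools defined
# below. The `flaky` function is an illustrative assumption:
#
#     def flaky(query: str) -> str:
#         raise ToolException("upstream API unavailable")
#
#     t = Tool(name="flaky", func=flaky, description="always fails",
#              handle_tool_error=True)
#     t.run("hi")  # returns "upstream API unavailable" instead of raising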
class BaseTool(ABC, BaseModel, metaclass=ToolMetaclass):
    """Interface LangChain tools must implement."""
name: str
"""The unique name of the tool that clearly communicates its purpose."""
description: str
"""Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
"""
args_schema: Optional[Type[BaseModel]] = None
"""Pydantic model class to validate and parse the tool's input arguments."""
return_direct: bool = False
"""Whether to return the tool's output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
"""
verbose: bool = False
"""Whether to log the tool's progress."""
callbacks: Callbacks = Field(default=None, exclude=True)
"""Callbacks to be called during tool execution."""
callback_manager: Optional[BaseCallbackManager] = Field(default=None, exclude=True)
"""Deprecated. Please use callbacks instead."""
handle_tool_error: Optional[
Union[bool, str, Callable[[ToolException], str]]
] = False
"""Handle the content of the ToolException thrown."""
    class Config:
        """Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def is_single_input(self) -> bool:
"""Whether the tool only accepts a single input."""
keys = {k for k in self.args if k != "kwargs"}
return len(keys) == 1
@property
def args(self) -> dict:
if self.args_schema is not None:
return self.args_schema.schema()["properties"]
else:
schema = create_schema_from_function(self.name, self._run)
return schema.schema()["properties"]
    def _parse_input(
        self,
tool_input: Union[str, Dict],
) -> Union[str, Dict[str, Any]]:
"""Convert tool input to pydantic model."""
input_args = self.args_schema
if isinstance(tool_input, str):
if input_args is not None:
key_ = next(iter(input_args.__fields__.keys()))
input_args.validate({key_: tool_input})
return tool_input
else:
if input_args is not None:
result = input_args.parse_obj(tool_input)
return {k: v for k, v in result.dict().items() if k in tool_input}
return tool_input
@root_validator()
def raise_deprecation(cls, values: Dict) -> Dict:
"""Raise deprecation warning if callback_manager is used."""
if values.get("callback_manager") is not None:
warnings.warn(
"callback_manager is deprecated. Please use callbacks instead.",
DeprecationWarning,
)
values["callbacks"] = values.pop("callback_manager", None)
return values
@abstractmethod
    def _run(
        self,
*args: Any,
**kwargs: Any,
) -> Any:
"""Use the tool.
Add run_manager: Optional[CallbackManagerForToolRun] = None
to child implementations to enable tracing,
"""
@abstractmethod
async def _arun(
self,
*args: Any,
**kwargs: Any,
) -> Any:
"""Use the tool asynchronously.
Add run_manager: Optional[AsyncCallbackManagerForToolRun] = None
to child implementations to enable tracing,
"""
def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:
if isinstance(tool_input, str):
return (tool_input,), {}
else:
return (), tool_input
    def run(
        self,
tool_input: Union[str, Dict],
verbose: Optional[bool] = None,
start_color: Optional[str] = "green",
color: Optional[str] = "green",
callbacks: Callbacks = None,
**kwargs: Any,
) -> Any:
"""Run the tool."""
parsed_input = self._parse_input(tool_input)
if not self.verbose and verbose is not None:
verbose_ = verbose
else:
verbose_ = self.verbose
callback_manager = CallbackManager.configure(
callbacks, self.callbacks, verbose=verbose_
)
new_arg_supported = signature(self._run).parameters.get("run_manager")
run_manager = callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
tool_input if isinstance(tool_input, str) else str(tool_input),
color=start_color,
            **kwargs,
        )
try:
tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
observation = (
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
if new_arg_supported
else self._run(*tool_args, **tool_kwargs)
)
except ToolException as e:
if not self.handle_tool_error:
run_manager.on_tool_error(e)
raise e
elif isinstance(self.handle_tool_error, bool):
if e.args:
observation = e.args[0]
else:
observation = "Tool execution error"
elif isinstance(self.handle_tool_error, str):
observation = self.handle_tool_error
elif callable(self.handle_tool_error):
observation = self.handle_tool_error(e)
else:
raise ValueError(
f"Got unexpected type of `handle_tool_error`. Expected bool, str "
f"or callable. Received: {self.handle_tool_error}"
)
run_manager.on_tool_end(
str(observation), color="red", name=self.name, **kwargs
)
            return observation
        except (Exception, KeyboardInterrupt) as e:
run_manager.on_tool_error(e)
raise e
else:
run_manager.on_tool_end(
str(observation), color=color, name=self.name, **kwargs
)
return observation
async def arun(
self,
tool_input: Union[str, Dict],
verbose: Optional[bool] = None,
start_color: Optional[str] = "green",
color: Optional[str] = "green",
callbacks: Callbacks = None,
**kwargs: Any,
) -> Any:
"""Run the tool asynchronously."""
parsed_input = self._parse_input(tool_input)
if not self.verbose and verbose is not None:
verbose_ = verbose
else:
verbose_ = self.verbose
callback_manager = AsyncCallbackManager.configure(
callbacks, self.callbacks, verbose=verbose_
)
new_arg_supported = signature(self._arun).parameters.get("run_manager")
run_manager = await callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
            tool_input if isinstance(tool_input, str) else str(tool_input),
            color=start_color,
**kwargs,
)
try:
tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
observation = (
await self._arun(*tool_args, run_manager=run_manager, **tool_kwargs)
if new_arg_supported
else await self._arun(*tool_args, **tool_kwargs)
)
except ToolException as e:
if not self.handle_tool_error:
await run_manager.on_tool_error(e)
raise e
elif isinstance(self.handle_tool_error, bool):
if e.args:
observation = e.args[0]
else:
observation = "Tool execution error"
elif isinstance(self.handle_tool_error, str):
observation = self.handle_tool_error
elif callable(self.handle_tool_error):
observation = self.handle_tool_error(e)
else:
raise ValueError(
f"Got unexpected type of `handle_tool_error`. Expected bool, str "
f"or callable. Received: {self.handle_tool_error}"
)
            await run_manager.on_tool_end(
                str(observation), color="red", name=self.name, **kwargs
)
return observation
except (Exception, KeyboardInterrupt) as e:
await run_manager.on_tool_error(e)
raise e
else:
await run_manager.on_tool_end(
str(observation), color=color, name=self.name, **kwargs
)
return observation
def __call__(self, tool_input: str, callbacks: Callbacks = None) -> str:
"""Make tool callable."""
return self.run(tool_input, callbacks=callbacks)
class Tool(BaseTool):
"""Tool that takes in function or coroutine directly."""
description: str = ""
func: Callable[..., str]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[str]]] = None
"""The asynchronous version of the function."""
@property
def args(self) -> dict:
"""The tool's input arguments."""
if self.args_schema is not None:
return self.args_schema.schema()["properties"]
return {"tool_input": {"type": "string"}}
    def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:
        """Convert tool input to pydantic model."""
args, kwargs = super()._to_args_and_kwargs(tool_input)
all_args = list(args) + list(kwargs.values())
if len(all_args) != 1:
raise ToolException(
f"Too many arguments to single-input tool {self.name}."
f" Args: {all_args}"
)
return tuple(all_args), {}
def _run(
self,
*args: Any,
run_manager: Optional[CallbackManagerForToolRun] = None,
**kwargs: Any,
) -> Any:
"""Use the tool."""
new_argument_supported = signature(self.func).parameters.get("callbacks")
return (
self.func(
*args,
callbacks=run_manager.get_child() if run_manager else None,
**kwargs,
)
if new_argument_supported
else self.func(*args, **kwargs)
)
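    # Editorial note (not part of the original file): the `callbacks` check
    # above is what issue #5456 reports as inconsistent. Schema creation
    # strips `run_manager` from the function signature, but at call time the
    # tool only injects a child callback manager when the wrapped function
    # declares a `callbacks` parameter. A function written as in the issue,
    #
    #     def get_list_of_products(profile_description: str,
    #                              run_manager: CallbackManagerForToolRun):
    #         ...
    #
    # therefore fails with "missing 1 required positional argument:
    # 'run_manager'"; declaring `callbacks: Callbacks = None` is the only
    # signature this check supports.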
    async def _arun(
        self,
*args: Any,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
**kwargs: Any,
) -> Any:
"""Use the tool asynchronously."""
if self.coroutine:
new_argument_supported = signature(self.coroutine).parameters.get(
"callbacks"
)
return (
await self.coroutine(
*args,
callbacks=run_manager.get_child() if run_manager else None,
**kwargs,
)
if new_argument_supported
else await self.coroutine(*args, **kwargs)
)
raise NotImplementedError("Tool does not support async")
    def __init__(
        self, name: str, func: Callable, description: str, **kwargs: Any
) -> None:
"""Initialize tool."""
super(Tool, self).__init__(
name=name, func=func, description=description, **kwargs
)
@classmethod
def from_function(
cls,
func: Callable,
name: str,
description: str,
return_direct: bool = False,
args_schema: Optional[Type[BaseModel]] = None,
**kwargs: Any,
) -> Tool:
"""Initialize tool from a function."""
return cls(
name=name,
func=func,
description=description,
return_direct=return_direct,
args_schema=args_schema,
**kwargs,
)
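    # Editorial note (not part of the original file): an illustrative
    # construction via from_function; `greet` is an assumed example function.
    #
    #     def greet(name: str) -> str:
    #         return f"Hello, {name}!"
    #
    #     greeting = Tool.from_function(
    #         func=greet, name="greet", description="Greet a user by name."
    #     )
    #     greeting.run("Ada")  # -> "Hello, Ada!"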
class StructuredTool(BaseTool):
    """Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Callable[..., Any]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
@property
def args(self) -> dict:
"""The tool's input arguments."""
return self.args_schema.schema()["properties"]
def _run(
self,
*args: Any,
run_manager: Optional[CallbackManagerForToolRun] = None,
**kwargs: Any,
) -> Any:
"""Use the tool."""
new_argument_supported = signature(self.func).parameters.get("callbacks")
return (
self.func(
*args,
callbacks=run_manager.get_child() if run_manager else None,
**kwargs,
)
if new_argument_supported
else self.func(*args, **kwargs)
)
    async def _arun(
        self,
*args: Any,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
**kwargs: Any,
) -> str:
"""Use the tool asynchronously."""
if self.coroutine:
new_argument_supported = signature(self.coroutine).parameters.get(
"callbacks"
)
return (
await self.coroutine(
*args,
callbacks=run_manager.get_child() if run_manager else None,
**kwargs,
)
if new_argument_supported
else await self.coroutine(*args, **kwargs)
)
raise NotImplementedError("Tool does not support async")
@classmethod
    def from_function(
        cls,
func: Callable,
name: Optional[str] = None,
description: Optional[str] = None,
return_direct: bool = False,
args_schema: Optional[Type[BaseModel]] = None,
infer_schema: bool = True,
**kwargs: Any,
) -> StructuredTool:
"""Create tool from a given function.
A classmethod that helps to create a tool from a function.
Args:
func: The function from which to create a tool
name: The name of the tool. Defaults to the function name
description: The description of the tool. Defaults to the function docstring
return_direct: Whether to return the result directly or as a callback
args_schema: The schema of the tool's input arguments
infer_schema: Whether to infer the schema from the function's signature
            **kwargs: Additional arguments to pass to the tool

        Returns:
The tool
Examples:
        .. code-block:: python
def add(a: int, b: int) -> int:
\"\"\"Add two numbers\"\"\"
return a + b
tool = StructuredTool.from_function(add)
tool.run(1, 2) # 3
"""
name = name or func.__name__
description = description or func.__doc__
assert (
description is not None
), "Function must have a docstring if description not provided."
description = f"{name}{signature(func)} - {description.strip()}"
_args_schema = args_schema
if _args_schema is None and infer_schema:
_args_schema = create_schema_from_function(f"{name}Schema", func)
return cls(
name=name,
func=func,
args_schema=_args_schema,
description=description,
return_direct=return_direct,
**kwargs,
)
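    # Editorial note (not part of the original file): the docstring example
    # above writes `tool.run(1, 2)`, but BaseTool.run takes a single
    # tool_input; for a multi-argument StructuredTool that input is a dict:
    #
    #     def add(a: int, b: int) -> int:
    #         """Add two numbers"""
    #         return a + b
    #
    #     add_tool = StructuredTool.from_function(add)
    #     add_tool.run({"a": 1, "b": 2})  # -> 3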
def tool(
    *args: Union[str, Callable],
return_direct: bool = False,
args_schema: Optional[Type[BaseModel]] = None,
infer_schema: bool = True,
) -> Callable:
"""Make tools out of functions, can be used with or without arguments.
Args:
*args: The arguments to the tool.
return_direct: Whether to return directly from the tool rather
than continuing the agent loop.
args_schema: optional argument schema for user to specify
infer_schema: Whether to infer the schema of the arguments from
the function's signature. This also makes the resultant tool
accept a dictionary input to its `run()` function.
Requires:
- Function must be of type (str) -> str
- Function must have a docstring
Examples:
.. code-block:: python
@tool
def search_api(query: str) -> str:
# Searches the API for the query.
return
@tool("search", return_direct=True)
def search_api(query: str) -> str:
# Searches the API for the query.
return
"""
def _make_with_name(tool_name: str) -> Callable:
        def _make_tool(func: Callable) -> BaseTool:
            if infer_schema or args_schema is not None:
            if infer_schema or args_schema is not None:
                return StructuredTool.from_function(
                    func,
                    name=tool_name,
                    return_direct=return_direct,
                    args_schema=args_schema,
                    infer_schema=infer_schema,
                )
            assert func.__doc__ is not None, "Function must have a docstring"
            return Tool(
                name=tool_name,
                func=func,
                description=f"{tool_name} tool",
                return_direct=return_direct,
            )

        return _make_tool

    if len(args) == 1 and isinstance(args[0], str):
        return _make_with_name(args[0])
    elif len(args) == 1 and callable(args[0]):
        return _make_with_name(args[0].__name__)(args[0])
    elif len(args) == 0:
        def _partial(func: Callable[[str], str]) -> BaseTool:
            return _make_with_name(func.__name__)(func)
        return _partial
    else:
        raise ValueError("Too many arguments for tool decorator")
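For reference, the three dispatch branches above correspond to these call forms. A sketch assumed to run against this version of the library; the function names are invented:

```python
from langchain.tools.base import BaseTool, tool

@tool  # one callable argument: the function's own name becomes the tool name
def search_api(query: str) -> str:
    """Search the API for the query."""
    return "result"

@tool("search", return_direct=True)  # one string argument: an explicit tool name
def search_backend(query: str) -> str:
    """Search the backend for the query."""
    return "result"

@tool(return_direct=True)  # no positional arguments: handled by _partial
def search_index(query: str) -> str:
    """Search the index for the query."""
    return "result"

assert isinstance(search_api, BaseTool) and search_api.name == "search_api"
assert search_backend.name == "search" and search_backend.return_direct
assert search_index.name == "search_index" and search_index.return_direct
```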
https://github.com/langchain-ai/langchain/issues/5456 | https://github.com/langchain-ai/langchain/pull/6483 | b4fe7f3a0995cc6a0111a7e71347eddf2d61f132 | 980c8651743b653f994ad6b97a27b0fa31ee92b4 | "2023-05-30T17:09:02Z" | python | "2023-06-23T08:48:27Z" | tests/unit_tests/tools/test_base.py |

"""Test the base tool implementation."""
import json
from datetime import datetime
from enum import Enum
from functools import partial
from typing import Any, Optional, Type, Union
import pytest
from pydantic import BaseModel
from langchain.agents.tools import Tool, tool
from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from langchain.tools.base import (
    BaseTool,
    SchemaAnnotationError,
    StructuredTool,
    ToolException,
)
def test_unnamed_decorator() -> None:
    """Test functionality with unnamed decorator."""

    @tool
    def search_api(query: str) -> str:
        """Search the API for the query."""
        return "API result"

    assert isinstance(search_api, BaseTool)
    assert search_api.name == "search_api"
    assert not search_api.return_direct
    assert search_api("test") == "API result"
class _MockSchema(BaseModel):
    arg1: int
    arg2: bool
    arg3: Optional[dict] = None


class _MockStructuredTool(BaseTool):
    name = "structured_api"
    args_schema: Type[BaseModel] = _MockSchema
    description = "A Structured Tool"

    def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
        return f"{arg1} {arg2} {arg3}"

    async def _arun(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
        raise NotImplementedError


def test_structured_args() -> None:
    """Test functionality with structured arguments."""
    structured_api = _MockStructuredTool()
    assert isinstance(structured_api, BaseTool)
    assert structured_api.name == "structured_api"
    expected_result = "1 True {'foo': 'bar'}"
    args = {"arg1": 1, "arg2": True, "arg3": {"foo": "bar"}}
    assert structured_api.run(args) == expected_result


def test_unannotated_base_tool_raises_error() -> None:
    """Test that a BaseTool without type hints raises an exception."""
    with pytest.raises(SchemaAnnotationError):
        class _UnAnnotatedTool(BaseTool):
            name = "structured_api"
            args_schema = _MockSchema
            description = "A Structured Tool"

            def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
                return f"{arg1} {arg2} {arg3}"

            async def _arun(
                self, arg1: int, arg2: bool, arg3: Optional[dict] = None
            ) -> str:
                raise NotImplementedError


def test_misannotated_base_tool_raises_error() -> None:
    """Test that a BaseTool with an incorrect typehint raises an exception."""
    with pytest.raises(SchemaAnnotationError):

        class _MisAnnotatedTool(BaseTool):
            name = "structured_api"
            # Annotated as an instance rather than a Type, which triggers the error.
            args_schema: BaseModel = _MockSchema
            description = "A Structured Tool"

            def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
                return f"{arg1} {arg2} {arg3}"

            async def _arun(
                self, arg1: int, arg2: bool, arg3: Optional[dict] = None
            ) -> str:
                raise NotImplementedError


def test_forward_ref_annotated_base_tool_accepted() -> None:
    """Test that using forward-ref annotation syntax is accepted."""
    class _ForwardRefAnnotatedTool(BaseTool):
        name = "structured_api"
args_schema: "Type[BaseModel]" = _MockSchema
description = "A Structured Tool"
def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
async def _arun(
self, arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> str:
raise NotImplementedError
def test_subclass_annotated_base_tool_accepted() -> None:
"""Test BaseTool child w/ custom schema isn't overwritten."""
class _ForwardRefAnnotatedTool(BaseTool):
name = "structured_api"
args_schema: Type[_MockSchema] = _MockSchema
description = "A Structured Tool"
def _run(self, arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
return f"{arg1} {arg2} {arg3}"
async def _arun(
self, arg1: int, arg2: bool, arg3: Optional[dict] = None
) -> str:
raise NotImplementedError
assert issubclass(_ForwardRefAnnotatedTool, BaseTool)
tool = _ForwardRefAnnotatedTool()
assert tool.args_schema == _MockSchema
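    # Hypothetical extra check (not in the original test): BaseTool.args
    # surfaces the pydantic schema's properties, so for _MockSchema it should
    # contain an entry per field.
    assert set(tool.args.keys()) == {"arg1", "arg2", "arg3"}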
def test_decorator_with_specified_schema() -> None:
    """Test that manually specified schemata are passed through to the tool."""
    @tool(args_schema=_MockSchema)
    def tool_func(arg1: int, arg2: bool, arg3: Optional[dict] = None) -> str:
        """Return the arguments directly."""
        return f"{arg1} {arg2} {arg3}"

    assert isinstance(tool_func, BaseTool)
    assert tool_func.args_schema == _MockSchema


def test_decorated_function_schema_equivalent() -> None:
    """Test that a BaseTool without a schema meets expectations."""

    @tool
    def structured_tool_input(
        arg1: int, arg2: bool, arg3: Optional[dict] = None
    ) -> str:
        """Return the arguments directly."""
        return f"{arg1} {arg2} {arg3}"

    assert isinstance(structured_tool_input, BaseTool)
    assert structured_tool_input.args_schema is not None
    assert (
        structured_tool_input.args_schema.schema()["properties"]
        == _MockSchema.schema()["properties"]
        == structured_tool_input.args
    )
def test_args_kwargs_filtered() -> None:
    class _SingleArgToolWithKwargs(BaseTool):
        name = "single_arg_tool"
description = "A single arged tool with kwargs"
def _run(
self,
some_arg: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
**kwargs: Any,
) -> str:
return "foo"
async def _arun(
self,
some_arg: str,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
**kwargs: Any,
) -> str:
raise NotImplementedError
tool = _SingleArgToolWithKwargs()
assert tool.is_single_input
class _VarArgToolWithKwargs(BaseTool): |