status (stringclasses, 1 value) | repo_name (stringclasses, 31 values) | repo_url (stringclasses, 31 values) | issue_id (int64, 1-104k) | title (stringlengths, 4-233) | body (stringlengths, 0-186k, nullable) | issue_url (stringlengths, 38-56) | pull_url (stringlengths, 37-54) | before_fix_sha (stringlengths, 40) | after_fix_sha (stringlengths, 40) | report_datetime (unknown) | language (stringclasses, 5 values) | commit_datetime (unknown) | updated_file (stringlengths, 7-188) | chunk_content (stringlengths, 1-1.03M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,307 | ImportError: cannot import name 'ApifyWrapper' from 'langchain.utilities' | ### System Info
Hi All,
I tried to run the Apify tutorial and ran into the issue `ImportError: cannot import name 'ApifyWrapper' from 'langchain.utilities'`. I checked utilities/__init__.py and could not find ApifyWrapper anywhere under the generic integrations with third-party systems and packages.
Any thoughts or support?
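For reference, here is a minimal check (assuming a local langchain install) to confirm which wrappers `langchain.utilities` actually exports:

import langchain.utilities

# Print everything the package re-exports and check for the missing wrapper.
print(sorted(langchain.utilities.__all__))
print("ApifyWrapper" in langchain.utilities.__all__)  # False on the affected version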
### Who can help?
@hwchase17, @agola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import os

import openai
# Imports assumed from the Apify tutorial; `from langchain.utilities import ApifyWrapper` is the line that fails.
from langchain.docstore.document import Document
from langchain.indexes import VectorstoreIndexCreator
from langchain.utilities import ApifyWrapper

openai.api_key = os.environ["OPEN_API_KEY"]
os.environ["APIFY_API_TOKEN"] = "apify_api_qNa00bcYGUYFwIZltWiOuhskmer7E61VE6GN"

apify = ApifyWrapper()
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://python.langchain.com/en/latest/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)
index = VectorstoreIndexCreator().from_loaders([loader])

query = "What is LangChain?"
result = index.query_with_sources(query)
print(result["answer"])
print(result["sources"])
### Expected behavior
LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). It provides modules that can be used to build language model applications, and it also provides chains and agents with memory capabilities.
https://python.langchain.com/en/latest/modules/models/llms.html, https://python.langchain.com/en/latest/getting_started/getting_started.html | https://github.com/langchain-ai/langchain/issues/8307 | https://github.com/langchain-ai/langchain/pull/10067 | 02e51f4217207eed4fc9ac89735cf1f660be3f10 | 86646ec555970e01130994dc75f3a0c5d4e52de9 | "2023-07-26T18:18:22Z" | python | "2023-08-31T22:47:44Z" | libs/langchain/langchain/utilities/__init__.py | "GoldenQueryAPIWrapper",
"GooglePlacesAPIWrapper",
"GoogleSearchAPIWrapper",
"GoogleSerperAPIWrapper",
"GraphQLAPIWrapper",
"JiraAPIWrapper",
"LambdaWrapper",
"MaxComputeAPIWrapper",
"MetaphorSearchAPIWrapper",
"OpenWeatherMapAPIWrapper",
"Portkey",
"PowerBIDataset",
"PubMedAPIWrapper",
"PythonREPL",
"Requests",
"RequestsWrapper",
"SQLDatabase",
"SceneXplainAPIWrapper",
"SearxSearchWrapper",
"SerpAPIWrapper",
"SparkSQL",
"TensorflowDatasets",
"TextRequestsWrapper",
"TextRequestsWrapper",
"TwilioAPIWrapper",
"WikipediaAPIWrapper",
"WolframAlphaAPIWrapper",
"ZapierNLAWrapper",
] |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,201 | DOC: Apify integration missing | ### Issue with current documentation:
The Apify integration was deleted by @hwchase17 in commit aa0e69bc98fa9c77b01e5104f12b2b779f64fd33, so this documentation is no longer valid:
https://python.langchain.com/docs/integrations/tools/apify
### Idea or request for content:
It would be highly beneficial to have information on a suitable replacement for the Apify integration. | https://github.com/langchain-ai/langchain/issues/8201 | https://github.com/langchain-ai/langchain/pull/10067 | 02e51f4217207eed4fc9ac89735cf1f660be3f10 | 86646ec555970e01130994dc75f3a0c5d4e52de9 | "2023-07-24T19:46:13Z" | python | "2023-08-31T22:47:44Z" | libs/langchain/langchain/utilities/__init__.py | """**Utilities** are the integrations with third-part systems and packages.
Other LangChain classes use **Utilities** to interact with third-part systems
and packages.
"""
from langchain.utilities.alpha_vantage import AlphaVantageAPIWrapper
from langchain.utilities.arxiv import ArxivAPIWrapper
from langchain.utilities.awslambda import LambdaWrapper
from langchain.utilities.bash import BashProcess
from langchain.utilities.bibtex import BibtexparserWrapper
from langchain.utilities.bing_search import BingSearchAPIWrapper
from langchain.utilities.brave_search import BraveSearchWrapper
from langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper
from langchain.utilities.golden_query import GoldenQueryAPIWrapper
from langchain.utilities.google_places_api import GooglePlacesAPIWrapper |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,201 | DOC: Apify integration missing | ### Issue with current documentation:
The Apify integration was deleted by @hwchase17 in commit aa0e69bc98fa9c77b01e5104f12b2b779f64fd33, so this documentation is no longer valid:
https://python.langchain.com/docs/integrations/tools/apify
### Idea or request for content:
It would be highly beneficial to have information on a suitable replacement for the Apify integration. | https://github.com/langchain-ai/langchain/issues/8201 | https://github.com/langchain-ai/langchain/pull/10067 | 02e51f4217207eed4fc9ac89735cf1f660be3f10 | 86646ec555970e01130994dc75f3a0c5d4e52de9 | "2023-07-24T19:46:13Z" | python | "2023-08-31T22:47:44Z" | libs/langchain/langchain/utilities/__init__.py | from langchain.utilities.google_search import GoogleSearchAPIWrapper
from langchain.utilities.google_serper import GoogleSerperAPIWrapper
from langchain.utilities.graphql import GraphQLAPIWrapper
from langchain.utilities.jira import JiraAPIWrapper
from langchain.utilities.max_compute import MaxComputeAPIWrapper
from langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper
from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper
from langchain.utilities.portkey import Portkey
from langchain.utilities.powerbi import PowerBIDataset
from langchain.utilities.pubmed import PubMedAPIWrapper
from langchain.utilities.python import PythonREPL
from langchain.utilities.requests import Requests, RequestsWrapper, TextRequestsWrapper
from langchain.utilities.scenexplain import SceneXplainAPIWrapper
from langchain.utilities.searx_search import SearxSearchWrapper
from langchain.utilities.serpapi import SerpAPIWrapper
from langchain.utilities.spark_sql import SparkSQL
from langchain.utilities.sql_database import SQLDatabase
from langchain.utilities.tensorflow_datasets import TensorflowDatasets
from langchain.utilities.twilio import TwilioAPIWrapper
from langchain.utilities.wikipedia import WikipediaAPIWrapper
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
from langchain.utilities.zapier import ZapierNLAWrapper
__all__ = [
"AlphaVantageAPIWrapper",
"ArxivAPIWrapper",
"BashProcess",
"BibtexparserWrapper",
"BingSearchAPIWrapper",
"BraveSearchWrapper",
"DuckDuckGoSearchAPIWrapper", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,201 | DOC: Apify integration missing | ### Issue with current documentation:
The Apify integration was deleted by @hwchase17 in commit aa0e69bc98fa9c77b01e5104f12b2b779f64fd33, so this documentation is no longer valid:
https://python.langchain.com/docs/integrations/tools/apify
### Idea or request for content:
It would be highly beneficial to have information on a suitable replacement for the Apify integration. | https://github.com/langchain-ai/langchain/issues/8201 | https://github.com/langchain-ai/langchain/pull/10067 | 02e51f4217207eed4fc9ac89735cf1f660be3f10 | 86646ec555970e01130994dc75f3a0c5d4e52de9 | "2023-07-24T19:46:13Z" | python | "2023-08-31T22:47:44Z" | libs/langchain/langchain/utilities/__init__.py | "GoldenQueryAPIWrapper",
"GooglePlacesAPIWrapper",
"GoogleSearchAPIWrapper",
"GoogleSerperAPIWrapper",
"GraphQLAPIWrapper",
"JiraAPIWrapper",
"LambdaWrapper",
"MaxComputeAPIWrapper",
"MetaphorSearchAPIWrapper",
"OpenWeatherMapAPIWrapper",
"Portkey",
"PowerBIDataset",
"PubMedAPIWrapper",
"PythonREPL",
"Requests",
"RequestsWrapper",
"SQLDatabase",
"SceneXplainAPIWrapper",
"SearxSearchWrapper",
"SerpAPIWrapper",
"SparkSQL",
"TensorflowDatasets",
"TextRequestsWrapper",
"TextRequestsWrapper",
"TwilioAPIWrapper",
"WikipediaAPIWrapper",
"WolframAlphaAPIWrapper",
"ZapierNLAWrapper",
] |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,685 | TimeWeightedVectorStoreRetriever (TWVSR) and ChromaDb vector store - base.py | Hi there.
Realise there is a lot happening and this looks to be something that has been missed. When trying to use TWVSR with ChromaDB, it errors because of the lack of an implementation of `_similarity_search_with_relevance_scores` inside of base.py:
`def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
raise NotImplementedError`
Trying to make a work-around now
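The rough work-around I have in mind looks like this (untested sketch; the subclass name and the 1/(1+distance) normalisation are my own assumptions, not part of LangChain):

from typing import Any, List, Tuple

from langchain.schema import Document
from langchain.vectorstores import Chroma

class ChromaWithRelevance(Chroma):
    # Hypothetical subclass: implement the missing hook by reusing
    # similarity_search_with_score and squashing distances into (0, 1].
    def _similarity_search_with_relevance_scores(
        self,
        query: str,
        k: int = 4,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        docs_and_distances = self.similarity_search_with_score(query, k=k, **kwargs)
        # Smaller distance means more similar, so invert it into a 0-1 style score.
        return [(doc, 1.0 / (1.0 + distance)) for doc, distance in docs_and_distances]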
Many thanks
Ian | https://github.com/langchain-ai/langchain/issues/3685 | https://github.com/langchain-ai/langchain/pull/9906 | bc8cceebf7b2d8e056b905926a6009367b6a8b14 | 4dc47bd3acc8928359773fc3fb80d289b9eae55e | "2023-04-28T00:16:05Z" | python | "2023-09-03T22:05:30Z" | libs/langchain/langchain/retrievers/time_weighted_retriever.py | import datetime
from copy import deepcopy
from typing import Any, Dict, List, Optional, Tuple
from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.pydantic_v1 import Field
from langchain.schema import BaseRetriever, Document
from langchain.vectorstores.base import VectorStore
def _get_hours_passed(time: datetime.datetime, ref_time: datetime.datetime) -> float: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,685 | TimeWeightedVectorStoreRetriever (TWVSR) and ChromaDb vector store - base.py | Hi there.
Realise there is a lot happening and this looks to be something that has been missed. When trying to use TWVSR with ChromaDB, it errors because of the lack of an implementation of `_similarity_search_with_relevance_scores` inside of base.py:
`def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
raise NotImplementedError`
Trying to make a work-around now
Many thanks
Ian | https://github.com/langchain-ai/langchain/issues/3685 | https://github.com/langchain-ai/langchain/pull/9906 | bc8cceebf7b2d8e056b905926a6009367b6a8b14 | 4dc47bd3acc8928359773fc3fb80d289b9eae55e | "2023-04-28T00:16:05Z" | python | "2023-09-03T22:05:30Z" | libs/langchain/langchain/retrievers/time_weighted_retriever.py | """Get the hours passed between two datetimes."""
return (time - ref_time).total_seconds() / 3600
class TimeWeightedVectorStoreRetriever(BaseRetriever):
"""Retriever that combines embedding similarity with
recency in retrieving values."""
vectorstore: VectorStore
"""The vectorstore to store documents and determine salience."""
search_kwargs: dict = Field(default_factory=lambda: dict(k=100))
"""Keyword arguments to pass to the vectorstore similarity search."""
memory_stream: List[Document] = Field(default_factory=list)
"""The memory_stream of documents to search through."""
decay_rate: float = Field(default=0.01)
"""The exponential decay factor used as (1.0-decay_rate)**(hrs_passed)."""
k: int = 4
"""The maximum number of documents to retrieve in a given call."""
other_score_keys: List[str] = []
"""Other keys in the metadata to factor into the score, e.g. 'importance'."""
default_salience: Optional[float] = None
"""The salience to assign memories not retrieved from the vector store.
None assigns no salience to documents not fetched from the vector store.
"""
class Config: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,685 | TimeWeightedVectorStoreRetriever (TWVSR) and ChromaDb vector store - base.py | Hi there.
Realise there is a lot happening and this looks to be something that has been missed. When trying to use TWVSR with ChromaDB, it errors because of the lack of an implementation of `_similarity_search_with_relevance_scores` inside of base.py:
`def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
raise NotImplementedError`
Trying to make a work-around now
Many thanks
Ian | https://github.com/langchain-ai/langchain/issues/3685 | https://github.com/langchain-ai/langchain/pull/9906 | bc8cceebf7b2d8e056b905926a6009367b6a8b14 | 4dc47bd3acc8928359773fc3fb80d289b9eae55e | "2023-04-28T00:16:05Z" | python | "2023-09-03T22:05:30Z" | libs/langchain/langchain/retrievers/time_weighted_retriever.py | """Configuration for this pydantic object."""
arbitrary_types_allowed = True
def _get_combined_score(
self,
document: Document,
vector_relevance: Optional[float],
current_time: datetime.datetime,
) -> float:
"""Return the combined score for a document."""
hours_passed = _get_hours_passed(
current_time,
document.metadata["last_accessed_at"],
)
score = (1.0 - self.decay_rate) ** hours_passed
for key in self.other_score_keys:
if key in document.metadata:
score += document.metadata[key]
if vector_relevance is not None:
score += vector_relevance
return score
def get_salient_docs(self, query: str) -> Dict[int, Tuple[Document, float]]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,685 | TimeWeightedVectorStoreRetriever (TWVSR) and ChromaDb vector store - base.py | Hi there.
Realise there is a lot happening and this looks to be something that has been missed. When trying to use TWVSR with ChromaDB, it errors because of the lack of an implementation of `_similarity_search_with_relevance_scores` inside of base.py:
`def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
raise NotImplementedError`
Trying to make a work-around now
Many thanks
Ian | https://github.com/langchain-ai/langchain/issues/3685 | https://github.com/langchain-ai/langchain/pull/9906 | bc8cceebf7b2d8e056b905926a6009367b6a8b14 | 4dc47bd3acc8928359773fc3fb80d289b9eae55e | "2023-04-28T00:16:05Z" | python | "2023-09-03T22:05:30Z" | libs/langchain/langchain/retrievers/time_weighted_retriever.py | """Return documents that are salient to the query."""
docs_and_scores: List[Tuple[Document, float]]
docs_and_scores = self.vectorstore.similarity_search_with_relevance_scores(
query, **self.search_kwargs
)
results = {}
for fetched_doc, relevance in docs_and_scores:
if "buffer_idx" in fetched_doc.metadata:
buffer_idx = fetched_doc.metadata["buffer_idx"]
doc = self.memory_stream[buffer_idx]
results[buffer_idx] = (doc, relevance)
return results
def _get_relevant_documents(
self, query: str, *, run_manager: CallbackManagerForRetrieverRun
) -> List[Document]:
"""Return documents that are relevant to the query."""
current_time = datetime.datetime.now()
docs_and_scores = {
doc.metadata["buffer_idx"]: (doc, self.default_salience)
for doc in self.memory_stream[-self.k :]
} |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,685 | TimeWeightedVectorStoreRetriever (TWVSR) and ChromaDb vector store - base.py | Hi there.
Realise there is a lot happening and this looks to be something that has been missed. When trying to use TWVSR with ChromaDB, it errors because of the lack of an implementation of `_similarity_search_with_relevance_scores` inside of base.py:
`def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
raise NotImplementedError`
Trying to make a work-around now
Many thanks
Ian | https://github.com/langchain-ai/langchain/issues/3685 | https://github.com/langchain-ai/langchain/pull/9906 | bc8cceebf7b2d8e056b905926a6009367b6a8b14 | 4dc47bd3acc8928359773fc3fb80d289b9eae55e | "2023-04-28T00:16:05Z" | python | "2023-09-03T22:05:30Z" | libs/langchain/langchain/retrievers/time_weighted_retriever.py | docs_and_scores.update(self.get_salient_docs(query))
rescored_docs = [
(doc, self._get_combined_score(doc, relevance, current_time))
for doc, relevance in docs_and_scores.values()
]
rescored_docs.sort(key=lambda x: x[1], reverse=True)
result = []
for doc, _ in rescored_docs[: self.k]:
buffered_doc = self.memory_stream[doc.metadata["buffer_idx"]]
buffered_doc.metadata["last_accessed_at"] = current_time
result.append(buffered_doc)
return result
def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
"""Add documents to vectorstore."""
current_time = kwargs.get("current_time")
if current_time is None:
current_time = datetime.datetime.now()
dup_docs = [deepcopy(d) for d in documents]
for i, doc in enumerate(dup_docs):
if "last_accessed_at" not in doc.metadata:
doc.metadata["last_accessed_at"] = current_time
if "created_at" not in doc.metadata:
doc.metadata["created_at"] = current_time
doc.metadata["buffer_idx"] = len(self.memory_stream) + i
self.memory_stream.extend(dup_docs)
return self.vectorstore.add_documents(dup_docs, **kwargs)
async def aadd_documents( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,685 | TimeWeightedVectorStoreRetriever (TWVSR) and ChromaDb vector store - base.py | Hi there.
Realise there is a lot happening and this looks to be something that has been missed. When trying to use TWVSR with ChromaDB, it errors because of the lack of an implementation of `_similarity_search_with_relevance_scores` inside of base.py:
`def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
raise NotImplementedError`
Trying to make a work-around now
Many thanks
Ian | https://github.com/langchain-ai/langchain/issues/3685 | https://github.com/langchain-ai/langchain/pull/9906 | bc8cceebf7b2d8e056b905926a6009367b6a8b14 | 4dc47bd3acc8928359773fc3fb80d289b9eae55e | "2023-04-28T00:16:05Z" | python | "2023-09-03T22:05:30Z" | libs/langchain/langchain/retrievers/time_weighted_retriever.py | self, documents: List[Document], **kwargs: Any
) -> List[str]:
"""Add documents to vectorstore."""
current_time = kwargs.get("current_time")
if current_time is None:
current_time = datetime.datetime.now()
dup_docs = [deepcopy(d) for d in documents]
for i, doc in enumerate(dup_docs):
if "last_accessed_at" not in doc.metadata:
doc.metadata["last_accessed_at"] = current_time
if "created_at" not in doc.metadata:
doc.metadata["created_at"] = current_time
doc.metadata["buffer_idx"] = len(self.memory_stream) + i
self.memory_stream.extend(dup_docs)
return await self.vectorstore.aadd_documents(dup_docs, **kwargs) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
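For reference, the pattern from the linked Pinecone docs looks roughly like this (sketch only; assumes pinecone-client 2.x, an existing "langchain-demo" index, and `vectors` already prepared as (id, embedding, metadata) tuples):

import pinecone

pinecone.init(api_key="***", environment="...")
# pool_threads sizes the thread pool used for async_req upserts.
index = pinecone.Index("langchain-demo", pool_threads=4)

def chunker(seq, batch_size):
    # Yield successive fixed-size batches of vectors.
    for i in range(0, len(seq), batch_size):
        yield seq[i : i + batch_size]

# Kick off all batches without blocking, then wait for every write to land.
async_results = [
    index.upsert(vectors=batch, async_req=True) for batch in chunker(vectors, 100)
]
[result.get() for result in async_results]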
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | from __future__ import annotations
import logging
import uuid
import warnings
from typing import Any, Callable, Iterable, List, Optional, Tuple, Union
import numpy as np
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import DistanceStrategy, maximal_marginal_relevance
logger = logging.getLogger(__name__)
class Pinecone(VectorStore):
"""`Pinecone` vector store.
To use, you should have the ``pinecone-client`` python package installed.
Example:
.. code-block:: python
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
index = pinecone.Index("langchain-demo")
embeddings = OpenAIEmbeddings()
vectorstore = Pinecone(index, embeddings.embed_query, "text")
"""
def __init__( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | self,
index: Any,
embedding: Union[Embeddings, Callable],
text_key: str,
namespace: Optional[str] = None,
distance_strategy: Optional[DistanceStrategy] = DistanceStrategy.COSINE,
):
"""Initialize with Pinecone client."""
try:
import pinecone
except ImportError:
raise ImportError( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | "Could not import pinecone python package. "
"Please install it with `pip install pinecone-client`."
)
if not isinstance(index, pinecone.index.Index):
raise ValueError(
f"client should be an instance of pinecone.index.Index, "
f"got {type(index)}"
)
if not isinstance(embedding, Embeddings):
warnings.warn(
"Passing in `embedding` as a Callable is deprecated. Please pass in an"
" Embeddings object instead."
)
self._index = index
self._embedding = embedding
self._text_key = text_key
self._namespace = namespace
self.distance_strategy = distance_strategy
@property
def embeddings(self) -> Optional[Embeddings]:
"""Access the query embedding object if available."""
if isinstance(self._embedding, Embeddings):
return self._embedding
return None
def _embed_documents(self, texts: Iterable[str]) -> List[List[float]]:
"""Embed search docs."""
if isinstance(self._embedding, Embeddings):
return self._embedding.embed_documents(list(texts))
return [self._embedding(t) for t in texts]
def _embed_query(self, text: str) -> List[float]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | """Embed query text."""
if isinstance(self._embedding, Embeddings):
return self._embedding.embed_query(text)
return self._embedding(text)
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
namespace: Optional[str] = None,
batch_size: int = 32,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
ids: Optional list of ids to associate with the texts.
namespace: Optional pinecone namespace to add the texts to.
Returns:
List of ids from adding the texts into the vectorstore.
"""
if namespace is None:
namespace = self._namespace
docs = []
ids = ids or [str(uuid.uuid4()) for _ in texts] |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | embeddings = self._embed_documents(texts)
for i, (text, embedding) in enumerate(zip(texts, embeddings)):
metadata = metadatas[i] if metadatas else {}
metadata[self._text_key] = text
docs.append((ids[i], embedding, metadata))
self._index.upsert(
vectors=docs, namespace=namespace, batch_size=batch_size, **kwargs
)
return ids
def similarity_search_with_score(
self,
query: str,
k: int = 4,
filter: Optional[dict] = None,
namespace: Optional[str] = None,
) -> List[Tuple[Document, float]]:
"""Return pinecone documents most similar to query, along with scores.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter: Dictionary of argument(s) to filter on metadata
namespace: Namespace to search in. Default will search in '' namespace.
Returns:
List of Documents most similar to the query and score for each
"""
return self.similarity_search_by_vector_with_score(
self._embed_query(query), k=k, filter=filter, namespace=namespace
)
def similarity_search_by_vector_with_score( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | self,
embedding: List[float],
*,
k: int = 4,
filter: Optional[dict] = None,
namespace: Optional[str] = None,
) -> List[Tuple[Document, float]]:
"""Return pinecone documents most similar to embedding, along with scores."""
if namespace is None:
namespace = self._namespace
docs = []
results = self._index.query(
[embedding],
top_k=k,
include_metadata=True,
namespace=namespace,
filter=filter,
)
for res in results["matches"]:
metadata = res["metadata"]
if self._text_key in metadata:
text = metadata.pop(self._text_key)
score = res["score"]
docs.append((Document(page_content=text, metadata=metadata), score))
else:
logger.warning(
f"Found document with no `{self._text_key}` key. Skipping."
)
return docs
def similarity_search( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | self,
query: str,
k: int = 4,
filter: Optional[dict] = None,
namespace: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Return pinecone documents most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
filter: Dictionary of argument(s) to filter on metadata
namespace: Namespace to search in. Default will search in '' namespace.
Returns:
List of Documents most similar to the query and score for each
"""
docs_and_scores = self.similarity_search_with_score(
query, k=k, filter=filter, namespace=namespace, **kwargs
)
return [doc for doc, _ in docs_and_scores]
def _select_relevance_score_fn(self) -> Callable[[float], float]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | """
The 'correct' relevance function
may differ depending on a few things, including:
- the distance / similarity metric used by the VectorStore
- the scale of your embeddings (OpenAI's are unit normed. Many others are not!)
- embedding dimensionality
- etc.
"""
if self.distance_strategy == DistanceStrategy.COSINE:
return self._cosine_relevance_score_fn
elif self.distance_strategy == DistanceStrategy.MAX_INNER_PRODUCT:
return self._max_inner_product_relevance_score_fn
elif self.distance_strategy == DistanceStrategy.EUCLIDEAN_DISTANCE:
return self._euclidean_relevance_score_fn
else:
raise ValueError(
"Unknown distance strategy, must be cosine, max_inner_product "
"(dot product), or euclidean"
)
def max_marginal_relevance_search_by_vector(
self,
embedding: List[float], |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
filter: Optional[dict] = None,
namespace: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
if namespace is None:
namespace = self._namespace
results = self._index.query(
[embedding],
top_k=fetch_k,
include_values=True,
include_metadata=True,
namespace=namespace,
filter=filter, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | )
mmr_selected = maximal_marginal_relevance(
np.array([embedding], dtype=np.float32),
[item["values"] for item in results["matches"]],
k=k,
lambda_mult=lambda_mult,
)
selected = [results["matches"][i]["metadata"] for i in mmr_selected]
return [
Document(page_content=metadata.pop((self._text_key)), metadata=metadata)
for metadata in selected
]
def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
filter: Optional[dict] = None,
namespace: Optional[str] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
embedding = self._embed_query(query)
return self.max_marginal_relevance_search_by_vector(
embedding, k, fetch_k, lambda_mult, filter, namespace
)
@classmethod
def from_texts(
cls,
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
batch_size: int = 32,
text_key: str = "text",
index_name: Optional[str] = None,
namespace: Optional[str] = None,
upsert_kwargs: Optional[dict] = None,
**kwargs: Any,
) -> Pinecone:
"""Construct Pinecone wrapper from raw documents.
This is a user friendly interface that:
1. Embeds documents.
2. Adds the documents to a provided Pinecone index
This is intended to be a quick way to get started.
Example: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | .. code-block:: python
from langchain import Pinecone
from langchain.embeddings import OpenAIEmbeddings
import pinecone
# The environment should be the one specified next to the API key
# in your Pinecone console
pinecone.init(api_key="***", environment="...")
embeddings = OpenAIEmbeddings()
pinecone = Pinecone.from_texts(
texts,
embeddings,
index_name="langchain-demo"
)
"""
try:
import pinecone
except ImportError:
raise ValueError(
"Could not import pinecone python package. "
"Please install it with `pip install pinecone-client`."
)
indexes = pinecone.list_indexes()
if index_name in indexes:
index = pinecone.Index(index_name)
elif len(indexes) == 0:
raise ValueError(
"No active indexes found in your Pinecone project, "
"are you sure you're using the right API key and environment?"
)
else: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | raise ValueError(
f"Index '{index_name}' not found in your Pinecone project. "
f"Did you mean one of the following indexes: {', '.join(indexes)}"
)
for i in range(0, len(texts), batch_size):
i_end = min(i + batch_size, len(texts))
lines_batch = texts[i:i_end]
if ids:
ids_batch = ids[i:i_end]
else:
ids_batch = [str(uuid.uuid4()) for n in range(i, i_end)]
embeds = embedding.embed_documents(lines_batch)
if metadatas:
metadata = metadatas[i:i_end]
else:
metadata = [{} for _ in range(i, i_end)]
for j, line in enumerate(lines_batch):
metadata[j][text_key] = line
to_upsert = zip(ids_batch, embeds, metadata)
_upsert_kwargs = upsert_kwargs or {}
index.upsert(vectors=list(to_upsert), namespace=namespace, **_upsert_kwargs)
return cls(index, embedding, text_key, namespace, **kwargs)
@classmethod
def from_existing_index( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | cls,
index_name: str,
embedding: Embeddings,
text_key: str = "text",
namespace: Optional[str] = None,
) -> Pinecone:
"""Load pinecone vectorstore from index name."""
try:
import pinecone
except ImportError:
raise ValueError(
"Could not import pinecone python package. "
"Please install it with `pip install pinecone-client`."
)
return cls(pinecone.Index(index_name), embedding, text_key, namespace)
def delete( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage of Pinecone's parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The functions `from_texts` and `add_texts` for index upsert don't take advantage of parallelism, especially when embeddings are calculated by HTTP calls (i.e. OpenAI embeddings). This makes the whole sequence inefficient from an IO-bound standpoint, as the pipeline is the following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/langchain/vectorstores/pinecone.py | self,
ids: Optional[List[str]] = None,
delete_all: Optional[bool] = None,
namespace: Optional[str] = None,
filter: Optional[dict] = None,
**kwargs: Any,
) -> None:
"""Delete by vector IDs or filter.
Args:
ids: List of ids to delete.
filter: Dictionary of conditions to filter vectors to delete.
"""
if namespace is None:
namespace = self._namespace
if delete_all:
self._index.delete(delete_all=True, namespace=namespace, **kwargs)
elif ids is not None:
chunk_size = 1000
for i in range(0, len(ids), chunk_size):
chunk = ids[i : i + chunk_size]
self._index.delete(ids=chunk, namespace=namespace, **kwargs)
elif filter is not None:
self._index.delete(filter=filter, namespace=namespace, **kwargs)
else:
raise ValueError("Either ids, delete_all, or filter must be provided.")
return None |
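# Usage sketch (editor's addition, not part of the original module) — assuming `docsearch`
# is a Pinecone vectorstore instance:
#   docsearch.delete(ids=["id-1", "id-2"])                       # delete specific vectors
#   docsearch.delete(delete_all=True, namespace="my-namespace")  # clear an entire namespace
#   docsearch.delete(filter={"source": "old.pdf"})               # delete by metadata filter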
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage from pinecone parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The function `from_text` and `add_text` for index upsert doesn't take advantage of parallelism especially when embeddings are calculated by HTTP calls (ie OpenAI embeddings). This makes the whole sequence inefficient from IO bound standpoint as the pipeline is following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/tests/integration_tests/vectorstores/test_pinecone.py | import importlib
import os
import time
import uuid
from typing import TYPE_CHECKING, List
import numpy as np
import pytest
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.pinecone import Pinecone
if TYPE_CHECKING:
import pinecone
index_name = "langchain-test-index"
namespace_name = "langchain-test-namespace"
dimension = 1536
def reset_pinecone() -> None:
assert os.environ.get("PINECONE_API_KEY") is not None
assert os.environ.get("PINECONE_ENVIRONMENT") is not None
import pinecone
importlib.reload(pinecone)
pinecone.init(
api_key=os.environ.get("PINECONE_API_KEY"),
environment=os.environ.get("PINECONE_ENVIRONMENT"),
)
class TestPinecone:
index: "pinecone.Index"
@classmethod
def setup_class(cls) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage from pinecone parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The function `from_text` and `add_text` for index upsert doesn't take advantage of parallelism especially when embeddings are calculated by HTTP calls (ie OpenAI embeddings). This makes the whole sequence inefficient from IO bound standpoint as the pipeline is following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/tests/integration_tests/vectorstores/test_pinecone.py | import pinecone
reset_pinecone()
cls.index = pinecone.Index(index_name)
if index_name in pinecone.list_indexes():
index_stats = cls.index.describe_index_stats()
if index_stats["dimension"] == dimension:
index_stats = cls.index.describe_index_stats()
for _namespace_name in index_stats["namespaces"].keys():
cls.index.delete(delete_all=True, namespace=_namespace_name)
else:
pinecone.delete_index(index_name)
pinecone.create_index(name=index_name, dimension=dimension)
else:
pinecone.create_index(name=index_name, dimension=dimension)
index_stats = cls.index.describe_index_stats()
assert index_stats["dimension"] == dimension
if index_stats["namespaces"].get(namespace_name) is not None:
assert index_stats["namespaces"][namespace_name]["vector_count"] == 0
@classmethod
def teardown_class(cls) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage from pinecone parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The function `from_text` and `add_text` for index upsert doesn't take advantage of parallelism especially when embeddings are calculated by HTTP calls (ie OpenAI embeddings). This makes the whole sequence inefficient from IO bound standpoint as the pipeline is following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/tests/integration_tests/vectorstores/test_pinecone.py | index_stats = cls.index.describe_index_stats()
for _namespace_name in index_stats["namespaces"].keys():
cls.index.delete(delete_all=True, namespace=_namespace_name)
reset_pinecone()
@pytest.fixture(autouse=True)
def setup(self) -> None:
index_stats = self.index.describe_index_stats()
for _namespace_name in index_stats["namespaces"].keys():
self.index.delete(delete_all=True, namespace=_namespace_name)
reset_pinecone()
@pytest.mark.vcr()
def test_from_texts( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage from pinecone parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The function `from_text` and `add_text` for index upsert doesn't take advantage of parallelism especially when embeddings are calculated by HTTP calls (ie OpenAI embeddings). This makes the whole sequence inefficient from IO bound standpoint as the pipeline is following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/tests/integration_tests/vectorstores/test_pinecone.py | self, texts: List[str], embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and search."""
unique_id = uuid.uuid4().hex
needs = f"foobuu {unique_id} booo"
texts.insert(0, needs)
docsearch = Pinecone.from_texts(
texts=texts,
embedding=embedding_openai,
index_name=index_name,
namespace=namespace_name,
)
output = docsearch.similarity_search(unique_id, k=1, namespace=namespace_name)
assert output == [Document(page_content=needs)]
@pytest.mark.vcr()
def test_from_texts_with_metadatas(
self, texts: List[str], embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and search."""
unique_id = uuid.uuid4().hex
needs = f"foobuu {unique_id} booo"
texts.insert(0, needs)
metadatas = [{"page": i} for i in range(len(texts))]
docsearch = Pinecone.from_texts(
texts,
embedding_openai,
index_name=index_name,
metadatas=metadatas,
namespace=namespace_name, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage from pinecone parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The function `from_text` and `add_text` for index upsert doesn't take advantage of parallelism especially when embeddings are calculated by HTTP calls (ie OpenAI embeddings). This makes the whole sequence inefficient from IO bound standpoint as the pipeline is following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/tests/integration_tests/vectorstores/test_pinecone.py | )
output = docsearch.similarity_search(needs, k=1, namespace=namespace_name)
assert output == [Document(page_content=needs, metadata={"page": 0.0})]
@pytest.mark.vcr()
def test_from_texts_with_scores(self, embedding_openai: OpenAIEmbeddings) -> None:
"""Test end to end construction and search with scores and IDs."""
texts = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts))]
docsearch = Pinecone.from_texts(
texts,
embedding_openai,
index_name=index_name,
metadatas=metadatas,
namespace=namespace_name,
)
output = docsearch.similarity_search_with_score(
"foo", k=3, namespace=namespace_name
)
docs = [o[0] for o in output]
scores = [o[1] for o in output]
sorted_documents = sorted(docs, key=lambda x: x.metadata["page"])
assert sorted_documents == [
Document(page_content="foo", metadata={"page": 0.0}),
Document(page_content="bar", metadata={"page": 1.0}),
Document(page_content="baz", metadata={"page": 2.0}),
]
assert scores[0] > scores[1] > scores[2]
def test_from_existing_index_with_namespaces( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage from pinecone parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The function `from_text` and `add_text` for index upsert doesn't take advantage of parallelism especially when embeddings are calculated by HTTP calls (ie OpenAI embeddings). This makes the whole sequence inefficient from IO bound standpoint as the pipeline is following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/tests/integration_tests/vectorstores/test_pinecone.py | self, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test that namespaces are properly handled."""
texts_1 = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts_1))]
Pinecone.from_texts(
texts_1,
embedding_openai,
index_name=index_name,
metadatas=metadatas,
namespace=f"{index_name}-1",
)
texts_2 = ["foo2", "bar2", "baz2"]
metadatas = [{"page": i} for i in range(len(texts_2))]
Pinecone.from_texts(
texts_2,
embedding_openai,
index_name=index_name,
metadatas=metadatas,
namespace=f"{index_name}-2",
)
docsearch = Pinecone.from_existing_index(
index_name=index_name,
embedding=embedding_openai,
namespace=f"{index_name}-1",
)
output = docsearch.similarity_search("foo", k=20, namespace=f"{index_name}-1") |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage from pinecone parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The function `from_text` and `add_text` for index upsert doesn't take advantage of parallelism especially when embeddings are calculated by HTTP calls (ie OpenAI embeddings). This makes the whole sequence inefficient from IO bound standpoint as the pipeline is following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/tests/integration_tests/vectorstores/test_pinecone.py | page_contents = sorted(set([o.page_content for o in output]))
assert all(content in ["foo", "bar", "baz"] for content in page_contents)
assert all(content not in ["foo2", "bar2", "baz2"] for content in page_contents)
def test_add_documents_with_ids(
self, texts: List[str], embedding_openai: OpenAIEmbeddings
) -> None:
ids = [uuid.uuid4().hex for _ in range(len(texts))]
Pinecone.from_texts(
texts=texts,
ids=ids,
embedding=embedding_openai,
index_name=index_name,
namespace=index_name,
)
index_stats = self.index.describe_index_stats()
assert index_stats["namespaces"][index_name]["vector_count"] == len(texts)
ids_1 = [uuid.uuid4().hex for _ in range(len(texts))]
Pinecone.from_texts(
texts=texts,
ids=ids_1,
embedding=embedding_openai,
index_name=index_name,
namespace=index_name,
)
index_stats = self.index.describe_index_stats()
assert index_stats["namespaces"][index_name]["vector_count"] == len(texts) * 2
assert index_stats["total_vector_count"] == len(texts) * 2
@pytest.mark.vcr()
def test_relevance_score_bound(self, embedding_openai: OpenAIEmbeddings) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,855 | Support index upsert parallelization for pinecone | ### Feature request
We can take advantage from pinecone parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The function `from_text` and `add_text` for index upsert doesn't take advantage of parallelism especially when embeddings are calculated by HTTP calls (ie OpenAI embeddings). This makes the whole sequence inefficient from IO bound standpoint as the pipeline is following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | https://github.com/langchain-ai/langchain/issues/9855 | https://github.com/langchain-ai/langchain/pull/9859 | 16a27ab244e6b92d74c48c206e0e6f1b5d00e126 | 4765c097035b9ff722fa9bbb7c3dd4eb6aed933c | "2023-08-28T13:09:29Z" | python | "2023-09-03T22:37:41Z" | libs/langchain/tests/integration_tests/vectorstores/test_pinecone.py | """Ensures all relevance scores are between 0 and 1."""
texts = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts))]
docsearch = Pinecone.from_texts(
texts,
embedding_openai,
index_name=index_name,
metadatas=metadatas,
)
time.sleep(20)
output = docsearch.similarity_search_with_relevance_scores("foo", k=3)
assert all(
(1 >= score or np.isclose(score, 1)) and score >= 0 for _, score in output
) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,019 | fix: Loading documents from a Youtube Url | ### System Info
MacOS M2 13.4.1 (22F82)
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce behaviour:
1. Run the [tutorial](https://python.langchain.com/docs/integrations/document_loaders/youtube_audio) with the default parameters `save_dir = "~/Downloads/YouTube"`
2. After calling `docs = loader.load()` the docs will be empty
I have implemented a dummy fix for the interim.
The error is here in this file: from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
`YouTubeAudioLoader.yield_blobs` method
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
```
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
```
The reason it doesn't work is that it's trying to use ~/Downloads/YouTube.
The fix I propose is either:
- Use the FULL file path in `save_dir` in the tutorial.
- Replace the problematic line with this, so that it finds the actual directory, even if you prefer to use `~` for specifying file paths.
```
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
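For reference, a quick check (the paths here are just examples) showing why the literal `~` never matches anything — `pathlib` does not expand it:
```python
import os
from pathlib import Path

save_dir = "~/Downloads/YouTube"

# pathlib treats "~" as a literal directory name, so this looks in ./~/Downloads/YouTube
print(list(Path(save_dir).glob("*.m4a")))                      # -> [] (no matches)
# expanding the user directory first makes the glob look in the real folder
print(list(Path(os.path.expanduser(save_dir)).glob("*.m4a")))  # -> the downloaded .m4a files, if any
```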
### Expected behavior
There should be documents in the loader.load() variable.
### My Fix
```
# Yield the written blobs
"""
you could fix save_dir like this...
(old)
save_dir = "~/Downloads/YouTube"
(new)
"/Users/shawnesquivel/Downloads/YouTube"
"""
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
# This works
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
| https://github.com/langchain-ai/langchain/issues/10019 | https://github.com/langchain-ai/langchain/pull/10133 | 31bbe807583b4a53c9fd2fa98d8b4d1fe185ba40 | e0f6ba08d6ad86226552d906e397a6a21f1904d0 | "2023-08-31T03:19:25Z" | python | "2023-09-04T07:21:33Z" | libs/langchain/langchain/document_loaders/blob_loaders/file_system.py | """Use to load blobs from the local file system."""
from pathlib import Path
from typing import Callable, Iterable, Iterator, Optional, Sequence, TypeVar, Union
from langchain.document_loaders.blob_loaders.schema import Blob, BlobLoader
T = TypeVar("T")
def _make_iterator(
length_func: Callable[[], int], show_progress: bool = False
) -> Callable[[Iterable[T]], Iterator[T]]:
"""Create a function that optionally wraps an iterable in tqdm."""
if show_progress:
try:
from tqdm.auto import tqdm
except ImportError:
raise ImportError(
"You must install tqdm to use show_progress=True."
"You can install tqdm with `pip install tqdm`."
)
def _with_tqdm(iterable: Iterable[T]) -> Iterator[T]:
"""Wrap an iterable in a tqdm progress bar."""
return tqdm(iterable, total=length_func())
iterator = _with_tqdm
else:
iterator = iter
return iterator
class FileSystemBlobLoader(BlobLoader): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,019 | fix: Loading documents from a Youtube Url | ### System Info
MacOS M2 13.4.1 (22F82)
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce behaviour:
1. Run the [tutorial](https://python.langchain.com/docs/integrations/document_loaders/youtube_audio) with the default parameters `save_dir = "~/Downloads/YouTube"`
2. After calling `docs = loader.load()` the docs will be empty
I have implemented a dummy fix for the interim.
The error is here in this file: from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
`YouTubeAudioLoader.yield_blobs` method
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
```
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
```
The reason it doesn't work is that it's trying to use ~/Downloads/YouTube.
The fix I propose is either:
- Use the FULL file path in `save_dir` in the tutorial.
- Replace the problematic line with this, so that it finds the actual directory, even if you prefer to use `~` for specifying file paths.
```
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
### Expected behavior
There should be documents in the loader.load() variable.
### My Fix
```
# Yield the written blobs
"""
you could fix save_dir like this...
(old)
save_dir = "~/Downloads/YouTube"
(new)
"/Users/shawnesquivel/Downloads/YouTube"
"""
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
# This works
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
| https://github.com/langchain-ai/langchain/issues/10019 | https://github.com/langchain-ai/langchain/pull/10133 | 31bbe807583b4a53c9fd2fa98d8b4d1fe185ba40 | e0f6ba08d6ad86226552d906e397a6a21f1904d0 | "2023-08-31T03:19:25Z" | python | "2023-09-04T07:21:33Z" | libs/langchain/langchain/document_loaders/blob_loaders/file_system.py | """Load blobs in the local file system.
Example:
.. code-block:: python
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = FileSystemBlobLoader("/path/to/directory")
for blob in loader.yield_blobs():
print(blob)
"""
def __init__(
self,
path: Union[str, Path],
*,
glob: str = "**/[!.]*",
exclude: Sequence[str] = (),
suffixes: Optional[Sequence[str]] = None,
show_progress: bool = False,
) -> None:
"""Initialize with a path to directory and how to glob over it.
Args:
path: Path to directory to load from
glob: Glob pattern relative to the specified path
by default set to pick up all non-hidden files
exclude: patterns to exclude from results, use glob syntax
suffixes: Provide to keep only files with these suffixes
Useful when wanting to keep files with different suffixes
Suffixes must include the dot, e.g. ".txt" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,019 | fix: Loading documents from a Youtube Url | ### System Info
MacOS M2 13.4.1 (22F82)
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce behaviour:
1. Run the [tutorial](https://python.langchain.com/docs/integrations/document_loaders/youtube_audio) with the default parameters `save_dir = "~/Downloads/YouTube"`
2. After calling `docs = loader.load()` the docs will be empty
I have implemented a dummy fix for the interim.
The error is here in this file: from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
`YouTubeAudioLoader.yield_blobs` method
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
```
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
```
The reason it doesn't work is that it's trying to use ~/Downloads/YouTube.
The fix I propose is either:
- Use the FULL file path in `save_dir` in the tutorial.
- Replace the problematic line with this, so that it finds the actual directory, even if you prefer to use `~` for specifying file paths.
```
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
### Expected behavior
There should be documents in the loader.load() variable.
### My Fix
```
# Yield the written blobs
"""
you could fix save_dir like this...
(old)
save_dir = "~/Downloads/YouTube"
(new)
"/Users/shawnesquivel/Downloads/YouTube"
"""
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
# This works
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
| https://github.com/langchain-ai/langchain/issues/10019 | https://github.com/langchain-ai/langchain/pull/10133 | 31bbe807583b4a53c9fd2fa98d8b4d1fe185ba40 | e0f6ba08d6ad86226552d906e397a6a21f1904d0 | "2023-08-31T03:19:25Z" | python | "2023-09-04T07:21:33Z" | libs/langchain/langchain/document_loaders/blob_loaders/file_system.py | show_progress: If true, will show a progress bar as the files are loaded.
This forces an iteration through all matching files
to count them prior to loading them.
Examples:
.. code-block:: python
# Recursively load all text files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/*.txt")
# Recursively load all non-hidden files in a directory.
loader = FileSystemBlobLoader("/path/to/directory", glob="**/[!.]*")
# Load all files in a directory without recursion.
loader = FileSystemBlobLoader("/path/to/directory", glob="*")
# Recursively load all files in a directory, except for py or pyc files.
loader = FileSystemBlobLoader(
"/path/to/directory",
glob="**/*.txt",
exclude=["**/*.py", "**/*.pyc"]
)
"""
if isinstance(path, Path):
_path = path
elif isinstance(path, str):
_path = Path(path)
else:
raise TypeError(f"Expected str or Path, got {type(path)}")
self.path = _path
self.glob = glob
self.suffixes = set(suffixes or [])
self.show_progress = show_progress
self.exclude = exclude
def yield_blobs( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,019 | fix: Loading documents from a Youtube Url | ### System Info
MacOS M2 13.4.1 (22F82)
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce behaviour:
1. Run the [tutorial](https://python.langchain.com/docs/integrations/document_loaders/youtube_audio) with the default parameters `save_dir = "~/Downloads/YouTube"`
2. After calling `docs = loader.load()` the docs will be empty
I have implemented a dummy fix for the interim.
The error is here in this file: from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
`YouTubeAudioLoader.yield_blobs` method
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
```
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
```
The reason it doesn't work is that it's trying to use ~/Downloads/YouTube.
The fix I propose is either:
- Use the FULL file path in `save_dir` in the tutorial.
- Replace the problematic line with this, so that it finds the actual directory, even if you prefer to use `~` for specifying file paths.
```
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
### Expected behavior
There should be documents in the loader.load() variable.
### My Fix
```
# Yield the written blobs
"""
you could fix save_dir like this...
(old)
save_dir = "~/Downloads/YouTube"
(new)
"/Users/shawnesquivel/Downloads/YouTube"
"""
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
# This works
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
| https://github.com/langchain-ai/langchain/issues/10019 | https://github.com/langchain-ai/langchain/pull/10133 | 31bbe807583b4a53c9fd2fa98d8b4d1fe185ba40 | e0f6ba08d6ad86226552d906e397a6a21f1904d0 | "2023-08-31T03:19:25Z" | python | "2023-09-04T07:21:33Z" | libs/langchain/langchain/document_loaders/blob_loaders/file_system.py | self,
) -> Iterable[Blob]:
"""Yield blobs that match the requested pattern."""
iterator = _make_iterator(
length_func=self.count_matching_files, show_progress=self.show_progress
)
for path in iterator(self._yield_paths()):
yield Blob.from_path(path)
def _yield_paths(self) -> Iterable[Path]:
"""Yield paths that match the requested pattern."""
paths = self.path.glob(self.glob)
for path in paths:
if self.exclude:
if any(path.match(glob) for glob in self.exclude):
continue
if path.is_file():
if self.suffixes and path.suffix not in self.suffixes:
continue
yield path
def count_matching_files(self) -> int:
"""Count files that match the pattern without loading them."""
num = 0
for _ in self._yield_paths():
num += 1
return num |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,080 | StructuredTool ainvoke isn't await parent class ainvoke | [code pointer](https://github.com/langchain-ai/langchain/blob/74fcfed4e2bdd186c2869a07008175a9b66b1ed4/libs/langchain/langchain/tools/base.py#L588C16-L588C16)
In `langchain.tools.base`, change
```python
class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return super().ainvoke(input, config, **kwargs)
```
to
```python
class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
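# (editor's note, sketch only — not part of the proposed snippet: `super().ainvoke(...)`
# just creates a coroutine, so without `await` the caller would get that un-awaited
# coroutine object back instead of the tool's output)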
return await super().ainvoke(input, config, **kwargs)
``` | https://github.com/langchain-ai/langchain/issues/10080 | https://github.com/langchain-ai/langchain/pull/10300 | fdba711d28375e86b23cfbad10a17feb67276ef5 | 28de8d132c8c4f7ecfe246c61375d91a04ff0abf | "2023-09-01T07:36:50Z" | python | "2023-09-08T02:54:53Z" | libs/langchain/langchain/tools/base.py | """Base implementation for tools or skills."""
from __future__ import annotations
import asyncio
import inspect
import warnings
from abc import abstractmethod
from functools import partial
from inspect import signature
from typing import Any, Awaitable, Callable, Dict, List, Optional, Tuple, Type, Union
from langchain.callbacks.base import BaseCallbackManager
from langchain.callbacks.manager import (
AsyncCallbackManager,
AsyncCallbackManagerForToolRun,
CallbackManager,
CallbackManagerForToolRun,
Callbacks,
)
from langchain.pydantic_v1 import (
BaseModel,
Extra,
Field,
create_model,
root_validator,
validate_arguments,
)
from langchain.schema.runnable import Runnable, RunnableConfig
class SchemaAnnotationError(TypeError): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,080 | StructuredTool ainvoke isn't await parent class ainvoke | [code pointer](https://github.com/langchain-ai/langchain/blob/74fcfed4e2bdd186c2869a07008175a9b66b1ed4/libs/langchain/langchain/tools/base.py#L588C16-L588C16)
In `langchain.tools.base`, change
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return super().ainvoke(input, config, **kwargs)
```
to
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return await super().ainvoke(input, config, **kwargs)
``` | https://github.com/langchain-ai/langchain/issues/10080 | https://github.com/langchain-ai/langchain/pull/10300 | fdba711d28375e86b23cfbad10a17feb67276ef5 | 28de8d132c8c4f7ecfe246c61375d91a04ff0abf | "2023-09-01T07:36:50Z" | python | "2023-09-08T02:54:53Z" | libs/langchain/langchain/tools/base.py | """Raised when 'args_schema' is missing or has an incorrect type annotation."""
def _create_subset_model(
name: str, model: BaseModel, field_names: list
) -> Type[BaseModel]:
"""Create a pydantic model with only a subset of model's fields."""
fields = {}
for field_name in field_names:
field = model.__fields__[field_name]
fields[field_name] = (field.outer_type_, field.field_info)
return create_model(name, **fields)
def _get_filtered_args(
inferred_model: Type[BaseModel],
func: Callable,
) -> dict:
"""Get the arguments from a function's signature."""
schema = inferred_model.schema()["properties"]
valid_keys = signature(func).parameters
return {k: schema[k] for k in valid_keys if k not in ("run_manager", "callbacks")}
class _SchemaConfig: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,080 | StructuredTool ainvoke isn't await parent class ainvoke | [code pointer](https://github.com/langchain-ai/langchain/blob/74fcfed4e2bdd186c2869a07008175a9b66b1ed4/libs/langchain/langchain/tools/base.py#L588C16-L588C16)
In `langchain.tools.base`, change
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return super().ainvoke(input, config, **kwargs)
```
to
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return await super().ainvoke(input, config, **kwargs)
``` | https://github.com/langchain-ai/langchain/issues/10080 | https://github.com/langchain-ai/langchain/pull/10300 | fdba711d28375e86b23cfbad10a17feb67276ef5 | 28de8d132c8c4f7ecfe246c61375d91a04ff0abf | "2023-09-01T07:36:50Z" | python | "2023-09-08T02:54:53Z" | libs/langchain/langchain/tools/base.py | """Configuration for the pydantic model."""
extra: Any = Extra.forbid
arbitrary_types_allowed: bool = True
def create_schema_from_function(
model_name: str,
func: Callable,
) -> Type[BaseModel]:
"""Create a pydantic schema from a function's signature.
Args:
model_name: Name to assign to the generated pydandic schema
func: Function to generate the schema from
Returns:
A pydantic model with the same arguments as the function
"""
validated = validate_arguments(func, config=_SchemaConfig)
inferred_model = validated.model
if "run_manager" in inferred_model.__fields__:
del inferred_model.__fields__["run_manager"]
if "callbacks" in inferred_model.__fields__:
del inferred_model.__fields__["callbacks"]
valid_properties = _get_filtered_args(inferred_model, func)
return _create_subset_model(
f"{model_name}Schema", inferred_model, list(valid_properties)
)
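# Usage sketch (editor's addition, not part of the original module): the inferred schema
# exposes only the function's own arguments, with run_manager/callbacks stripped out.
#   def search(query: str, max_results: int = 5) -> str:
#       ...
#   ArgsSchema = create_schema_from_function("search", search)
#   ArgsSchema.schema()["properties"]  # -> {"query": {...}, "max_results": {...}}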
class ToolException(Exception): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,080 | StructuredTool ainvoke isn't await parent class ainvoke | [code pointer](https://github.com/langchain-ai/langchain/blob/74fcfed4e2bdd186c2869a07008175a9b66b1ed4/libs/langchain/langchain/tools/base.py#L588C16-L588C16)
In `langchain.tools.base`, change
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return super().ainvoke(input, config, **kwargs)
```
to
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return await super().ainvoke(input, config, **kwargs)
``` | https://github.com/langchain-ai/langchain/issues/10080 | https://github.com/langchain-ai/langchain/pull/10300 | fdba711d28375e86b23cfbad10a17feb67276ef5 | 28de8d132c8c4f7ecfe246c61375d91a04ff0abf | "2023-09-01T07:36:50Z" | python | "2023-09-08T02:54:53Z" | libs/langchain/langchain/tools/base.py | """An optional exception that tool throws when execution error occurs.
When this exception is thrown, the agent will not stop working,
but will handle the exception according to the handle_tool_error
variable of the tool, and the processing result will be returned
to the agent as observation, and printed in red on the console.
"""
pass
class BaseTool(BaseModel, Runnable[Union[str, Dict], Any]):
"""Interface LangChain tools must implement."""
def __init_subclass__(cls, **kwargs: Any) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,080 | StructuredTool ainvoke isn't await parent class ainvoke | [code pointer](https://github.com/langchain-ai/langchain/blob/74fcfed4e2bdd186c2869a07008175a9b66b1ed4/libs/langchain/langchain/tools/base.py#L588C16-L588C16)
In `langchain.tools.base`, change
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return super().ainvoke(input, config, **kwargs)
```
to
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return await super().ainvoke(input, config, **kwargs)
``` | https://github.com/langchain-ai/langchain/issues/10080 | https://github.com/langchain-ai/langchain/pull/10300 | fdba711d28375e86b23cfbad10a17feb67276ef5 | 28de8d132c8c4f7ecfe246c61375d91a04ff0abf | "2023-09-01T07:36:50Z" | python | "2023-09-08T02:54:53Z" | libs/langchain/langchain/tools/base.py | """Create the definition of the new tool class."""
super().__init_subclass__(**kwargs)
args_schema_type = cls.__annotations__.get("args_schema", None)
if args_schema_type is not None:
if args_schema_type is None or args_schema_type == BaseModel:
typehint_mandate = """
class ChildTool(BaseTool):
...
args_schema: Type[BaseModel] = SchemaClass
..."""
name = cls.__name__
raise SchemaAnnotationError(
f"Tool definition for {name} must include valid type annotations"
f" for argument 'args_schema' to behave as expected.\n"
f"Expected annotation of 'Type[BaseModel]'"
f" but got '{args_schema_type}'.\n"
f"Expected class looks like:\n"
f"{typehint_mandate}"
)
name: str
"""The unique name of the tool that clearly communicates its purpose."""
description: str
"""Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
""" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,080 | StructuredTool ainvoke isn't await parent class ainvoke | [code pointer](https://github.com/langchain-ai/langchain/blob/74fcfed4e2bdd186c2869a07008175a9b66b1ed4/libs/langchain/langchain/tools/base.py#L588C16-L588C16)
In `langchain.tools.base`, change
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return super().ainvoke(input, config, **kwargs)
```
to
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return await super().ainvoke(input, config, **kwargs)
``` | https://github.com/langchain-ai/langchain/issues/10080 | https://github.com/langchain-ai/langchain/pull/10300 | fdba711d28375e86b23cfbad10a17feb67276ef5 | 28de8d132c8c4f7ecfe246c61375d91a04ff0abf | "2023-09-01T07:36:50Z" | python | "2023-09-08T02:54:53Z" | libs/langchain/langchain/tools/base.py | args_schema: Optional[Type[BaseModel]] = None
"""Pydantic model class to validate and parse the tool's input arguments."""
return_direct: bool = False
"""Whether to return the tool's output directly. Setting this to True means
that after the tool is called, the AgentExecutor will stop looping.
"""
verbose: bool = False
"""Whether to log the tool's progress."""
callbacks: Callbacks = Field(default=None, exclude=True)
"""Callbacks to be called during tool execution."""
callback_manager: Optional[BaseCallbackManager] = Field(default=None, exclude=True)
"""Deprecated. Please use callbacks instead."""
tags: Optional[List[str]] = None
"""Optional list of tags associated with the tool. Defaults to None
These tags will be associated with each call to this tool,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to eg identify a specific instance of a tool with its use case.
"""
metadata: Optional[Dict[str, Any]] = None
"""Optional metadata associated with the tool. Defaults to None
This metadata will be associated with each call to this tool,
and passed as arguments to the handlers defined in `callbacks`.
You can use these to eg identify a specific instance of a tool with its use case.
"""
handle_tool_error: Optional[
Union[bool, str, Callable[[ToolException], str]]
] = False
"""Handle the content of the ToolException thrown."""
class Config: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,080 | StructuredTool ainvoke isn't await parent class ainvoke | [code pointer](https://github.com/langchain-ai/langchain/blob/74fcfed4e2bdd186c2869a07008175a9b66b1ed4/libs/langchain/langchain/tools/base.py#L588C16-L588C16)
In `langchain.tools.base`, change
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return super().ainvoke(input, config, **kwargs)
```
to
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return await super().ainvoke(input, config, **kwargs)
``` | https://github.com/langchain-ai/langchain/issues/10080 | https://github.com/langchain-ai/langchain/pull/10300 | fdba711d28375e86b23cfbad10a17feb67276ef5 | 28de8d132c8c4f7ecfe246c61375d91a04ff0abf | "2023-09-01T07:36:50Z" | python | "2023-09-08T02:54:53Z" | libs/langchain/langchain/tools/base.py | """Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def is_single_input(self) -> bool:
"""Whether the tool only accepts a single input."""
keys = {k for k in self.args if k != "kwargs"}
return len(keys) == 1
@property
def args(self) -> dict:
if self.args_schema is not None:
return self.args_schema.schema()["properties"]
else:
schema = create_schema_from_function(self.name, self._run)
return schema.schema()["properties"]
def invoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
config = config or {}
return self.run(
input,
callbacks=config.get("callbacks"),
tags=config.get("tags"),
metadata=config.get("metadata"),
**kwargs,
)
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
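# If _arun has not been overridden, defer to the default Runnable.ainvoke implementation.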
if type(self)._arun == BaseTool._arun:
return super().ainvoke(input, config, **kwargs)
config = config or {}
return await self.arun(
input,
callbacks=config.get("callbacks"),
tags=config.get("tags"),
metadata=config.get("metadata"),
**kwargs,
)
def _parse_input(
self,
tool_input: Union[str, Dict],
) -> Union[str, Dict[str, Any]]:
"""Convert tool input to pydantic model."""
input_args = self.args_schema
if isinstance(tool_input, str):
if input_args is not None:
key_ = next(iter(input_args.__fields__.keys()))
input_args.validate({key_: tool_input})
return tool_input
else:
if input_args is not None:
result = input_args.parse_obj(tool_input)
return {k: v for k, v in result.dict().items() if k in tool_input}
return tool_input
@root_validator()
def raise_deprecation(cls, values: Dict) -> Dict:
"""Raise deprecation warning if callback_manager is used."""
if values.get("callback_manager") is not None:
warnings.warn(
"callback_manager is deprecated. Please use callbacks instead.",
DeprecationWarning,
)
values["callbacks"] = values.pop("callback_manager", None)
return values
@abstractmethod
def _run(
self,
*args: Any,
**kwargs: Any,
) -> Any:
"""Use the tool.
Add run_manager: Optional[CallbackManagerForToolRun] = None
to child implementations to enable tracing,
"""
async def _arun(
self,
*args: Any,
**kwargs: Any,
) -> Any:
"""Use the tool asynchronously.
Add run_manager: Optional[AsyncCallbackManagerForToolRun] = None
to child implementations to enable tracing,
"""
return await asyncio.get_running_loop().run_in_executor(
None,
partial(self._run, **kwargs),
*args,
)
def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:
if isinstance(tool_input, str):
return (tool_input,), {}
else:
return (), tool_input
def run(
self,
tool_input: Union[str, Dict],
verbose: Optional[bool] = None,
start_color: Optional[str] = "green",
color: Optional[str] = "green",
callbacks: Callbacks = None,
*,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> Any:
"""Run the tool."""
parsed_input = self._parse_input(tool_input)
if not self.verbose and verbose is not None:
verbose_ = verbose
else:
verbose_ = self.verbose
callback_manager = CallbackManager.configure(
callbacks,
self.callbacks,
verbose_,
tags,
self.tags,
metadata,
self.metadata,
)
new_arg_supported = signature(self._run).parameters.get("run_manager")
run_manager = callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
tool_input if isinstance(tool_input, str) else str(tool_input),
color=start_color,
**kwargs,
)
try:
tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
observation = (
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
if new_arg_supported
else self._run(*tool_args, **tool_kwargs)
)
except ToolException as e:
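# Translate the ToolException into an observation according to handle_tool_error (bool, str, or callable).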
if not self.handle_tool_error:
run_manager.on_tool_error(e)
raise e
elif isinstance(self.handle_tool_error, bool):
if e.args:
observation = e.args[0]
else:
observation = "Tool execution error"
elif isinstance(self.handle_tool_error, str):
observation = self.handle_tool_error
elif callable(self.handle_tool_error):
observation = self.handle_tool_error(e)
else:
raise ValueError(
f"Got unexpected type of `handle_tool_error`. Expected bool, str " |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,080 | StructuredTool ainvoke isn't await parent class ainvoke | [code pointer](https://github.com/langchain-ai/langchain/blob/74fcfed4e2bdd186c2869a07008175a9b66b1ed4/libs/langchain/langchain/tools/base.py#L588C16-L588C16)
In `langchain.tools.base`, change
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return super().ainvoke(input, config, **kwargs)
```
to
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return await super().ainvoke(input, config, **kwargs)
``` | https://github.com/langchain-ai/langchain/issues/10080 | https://github.com/langchain-ai/langchain/pull/10300 | fdba711d28375e86b23cfbad10a17feb67276ef5 | 28de8d132c8c4f7ecfe246c61375d91a04ff0abf | "2023-09-01T07:36:50Z" | python | "2023-09-08T02:54:53Z" | libs/langchain/langchain/tools/base.py | f"or callable. Received: {self.handle_tool_error}"
)
run_manager.on_tool_end(
str(observation), color="red", name=self.name, **kwargs
)
return observation
except (Exception, KeyboardInterrupt) as e:
run_manager.on_tool_error(e)
raise e
else:
run_manager.on_tool_end(
str(observation), color=color, name=self.name, **kwargs
)
return observation
async def arun(
self,
tool_input: Union[str, Dict],
verbose: Optional[bool] = None,
start_color: Optional[str] = "green",
color: Optional[str] = "green",
callbacks: Callbacks = None,
*,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> Any:
"""Run the tool asynchronously."""
parsed_input = self._parse_input(tool_input)
if not self.verbose and verbose is not None:
verbose_ = verbose
else:
verbose_ = self.verbose
callback_manager = AsyncCallbackManager.configure(
callbacks,
self.callbacks,
verbose_,
tags,
self.tags,
metadata,
self.metadata,
)
new_arg_supported = signature(self._arun).parameters.get("run_manager")
run_manager = await callback_manager.on_tool_start(
{"name": self.name, "description": self.description},
tool_input if isinstance(tool_input, str) else str(tool_input),
color=start_color,
**kwargs,
)
try:
tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
observation = (
await self._arun(*tool_args, run_manager=run_manager, **tool_kwargs)
if new_arg_supported
else await self._arun(*tool_args, **tool_kwargs)
)
except ToolException as e:
if not self.handle_tool_error:
await run_manager.on_tool_error(e)
raise e
elif isinstance(self.handle_tool_error, bool):
if e.args:
observation = e.args[0]
else:
observation = "Tool execution error"
elif isinstance(self.handle_tool_error, str):
observation = self.handle_tool_error
elif callable(self.handle_tool_error):
observation = self.handle_tool_error(e)
else:
raise ValueError(
f"Got unexpected type of `handle_tool_error`. Expected bool, str "
f"or callable. Received: {self.handle_tool_error}"
)
await run_manager.on_tool_end(
str(observation), color="red", name=self.name, **kwargs
)
return observation
except (Exception, KeyboardInterrupt) as e:
await run_manager.on_tool_error(e)
raise e
else:
await run_manager.on_tool_end(
str(observation), color=color, name=self.name, **kwargs
)
return observation
def __call__(self, tool_input: str, callbacks: Callbacks = None) -> str:
"""Make tool callable."""
return self.run(tool_input, callbacks=callbacks)
class Tool(BaseTool):
"""Tool that takes in function or coroutine directly."""
description: str = ""
func: Optional[Callable[..., str]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[str]]] = None
"""The asynchronous version of the function."""
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
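# Note: the same un-awaited `return super().ainvoke(...)` pattern that issue #10080 reports for StructuredTool appears here in Tool.ainvoke.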
return super().ainvoke(input, config, **kwargs)
@property
def args(self) -> dict:
"""The tool's input arguments."""
if self.args_schema is not None:
return self.args_schema.schema()["properties"]
return {"tool_input": {"type": "string"}}
def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:
"""Convert tool input to pydantic model."""
args, kwargs = super()._to_args_and_kwargs(tool_input)
all_args = list(args) + list(kwargs.values())
if len(all_args) != 1:
raise ToolException(
f"Too many arguments to single-input tool {self.name}."
f" Args: {all_args}"
)
return tuple(all_args), {}
def _run(
self,
*args: Any,
run_manager: Optional[CallbackManagerForToolRun] = None,
**kwargs: Any,
) -> Any:
"""Use the tool."""
if self.func:
new_argument_supported = signature(self.func).parameters.get("callbacks")
return (
self.func(
*args,
callbacks=run_manager.get_child() if run_manager else None,
**kwargs,
)
if new_argument_supported
else self.func(*args, **kwargs)
)
raise NotImplementedError("Tool does not support sync")
async def _arun(
self,
*args: Any,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
**kwargs: Any,
) -> Any:
"""Use the tool asynchronously."""
if self.coroutine:
new_argument_supported = signature(self.coroutine).parameters.get(
"callbacks"
)
return (
await self.coroutine(
*args,
callbacks=run_manager.get_child() if run_manager else None,
**kwargs,
)
if new_argument_supported
else await self.coroutine(*args, **kwargs)
)
else:
return await asyncio.get_running_loop().run_in_executor(
None, partial(self._run, run_manager=run_manager, **kwargs), *args
)
def __init__(
self, name: str, func: Optional[Callable], description: str, **kwargs: Any
) -> None:
"""Initialize tool."""
super(Tool, self).__init__(
name=name, func=func, description=description, **kwargs
)
@classmethod
def from_function(
cls,
func: Optional[Callable],
name: str,
description: str,
return_direct: bool = False,
args_schema: Optional[Type[BaseModel]] = None,
coroutine: Optional[
Callable[..., Awaitable[Any]]
] = None,
**kwargs: Any,
) -> Tool:
"""Initialize tool from a function."""
if func is None and coroutine is None:
raise ValueError("Function and/or coroutine must be provided")
return cls(
name=name,
func=func,
coroutine=coroutine,
description=description,
return_direct=return_direct,
args_schema=args_schema,
**kwargs,
)
class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
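# Issue #10080: this call is returned without `await`; the proposed fix (see PR #10300) is `return await super().ainvoke(input, config, **kwargs)`.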
return super().ainvoke(input, config, **kwargs)
@property
def args(self) -> dict:
"""The tool's input arguments."""
return self.args_schema.schema()["properties"]
def _run(
self,
*args: Any,
run_manager: Optional[CallbackManagerForToolRun] = None,
**kwargs: Any,
) -> Any:
"""Use the tool."""
if self.func:
new_argument_supported = signature(self.func).parameters.get("callbacks")
return (
self.func(
*args,
callbacks=run_manager.get_child() if run_manager else None,
**kwargs,
)
if new_argument_supported
else self.func(*args, **kwargs)
)
raise NotImplementedError("Tool does not support sync")
async def _arun(
self,
*args: Any,
run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
**kwargs: Any,
) -> str:
"""Use the tool asynchronously."""
if self.coroutine:
new_argument_supported = signature(self.coroutine).parameters.get(
"callbacks"
)
return (
await self.coroutine(
*args,
callbacks=run_manager.get_child() if run_manager else None,
**kwargs,
)
if new_argument_supported
else await self.coroutine(*args, **kwargs)
)
return await asyncio.get_running_loop().run_in_executor(
None,
self._run,
partial(self._run, run_manager=run_manager, **kwargs),
*args,
)
@classmethod
def from_function(
cls,
func: Optional[Callable] = None,
coroutine: Optional[Callable[..., Awaitable[Any]]] = None,
name: Optional[str] = None,
description: Optional[str] = None,
return_direct: bool = False,
args_schema: Optional[Type[BaseModel]] = None,
infer_schema: bool = True,
**kwargs: Any,
) -> StructuredTool:
"""Create tool from a given function.
A classmethod that helps to create a tool from a function.
Args:
func: The function from which to create a tool
coroutine: The async function from which to create a tool
name: The name of the tool. Defaults to the function name
description: The description of the tool. Defaults to the function docstring
return_direct: Whether to return the result directly or as a callback
args_schema: The schema of the tool's input arguments
infer_schema: Whether to infer the schema from the function's signature
**kwargs: Additional arguments to pass to the tool
Returns:
The tool
Examples:
.. code-block:: python
def add(a: int, b: int) -> int:
\"\"\"Add two numbers\"\"\"
return a + b
tool = StructuredTool.from_function(add)
tool.run(1, 2) # 3
"""
if func is not None:
source_function = func
elif coroutine is not None:
source_function = coroutine
else:
raise ValueError("Function and/or coroutine must be provided")
name = name or source_function.__name__
description = description or source_function.__doc__
if description is None:
raise ValueError(
"Function must have a docstring if description not provided."
)
sig = signature(source_function)
description = f"{name}{sig} - {description.strip()}"
_args_schema = args_schema
if _args_schema is None and infer_schema:
_args_schema = create_schema_from_function(f"{name}Schema", source_function)
return cls(
name=name,
func=func,
coroutine=coroutine,
args_schema=_args_schema,
description=description,
return_direct=return_direct,
**kwargs,
)
def tool(
*args: Union[str, Callable],
return_direct: bool = False,
args_schema: Optional[Type[BaseModel]] = None,
infer_schema: bool = True,
) -> Callable:
"""Make tools out of functions, can be used with or without arguments.
Args:
*args: The arguments to the tool.
return_direct: Whether to return directly from the tool rather
than continuing the agent loop.
args_schema: optional argument schema for user to specify
infer_schema: Whether to infer the schema of the arguments from
the function's signature. This also makes the resultant tool
accept a dictionary input to its `run()` function.
Requires:
- Function must be of type (str) -> str
- Function must have a docstring
Examples:
.. code-block:: python
@tool
def search_api(query: str) -> str:
# Searches the API for the query.
return
@tool("search", return_direct=True)
def search_api(query: str) -> str:
# Searches the API for the query.
return
"""
    def _make_with_name(tool_name: str) -> Callable:
        def _make_tool(dec_func: Callable) -> BaseTool:
if inspect.iscoroutinefunction(dec_func):
coroutine = dec_func
func = None
else:
coroutine = None
func = dec_func
if infer_schema or args_schema is not None:
return StructuredTool.from_function(
func,
coroutine,
name=tool_name,
return_direct=return_direct,
args_schema=args_schema,
infer_schema=infer_schema,
                )
            if func.__doc__ is None:
raise ValueError(
"Function must have a docstring if "
"description not provided and infer_schema is False."
)
return Tool(
name=tool_name,
func=func,
description=f"{tool_name} tool",
return_direct=return_direct,
coroutine=coroutine,
)
return _make_tool
if len(args) == 1 and isinstance(args[0], str):
return _make_with_name(args[0])
elif len(args) == 1 and callable(args[0]):
return _make_with_name(args[0].__name__)(args[0])
elif len(args) == 0:
def _partial(func: Callable[[str], str]) -> BaseTool:
return _make_with_name(func.__name__)(func)
return _partial
else:
raise ValueError("Too many arguments for tool decorator") |
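The base.py chunks above belong to issue #10080, where `StructuredTool.ainvoke` returned `super().ainvoke(...)` without awaiting it. As a quick illustration, here is a minimal, hypothetical sketch (the function names and values are invented, and it assumes a langchain version that exposes `StructuredTool.from_function` and the Runnable-style `ainvoke` shown above) that exercises both code paths:
```python
import asyncio

from langchain.tools import StructuredTool


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


async def amultiply(a: int, b: int) -> int:
    """Multiply two integers asynchronously."""
    return a * b


async def main() -> None:
    # Sync-only tool: ainvoke falls back to run_in_executor, which was already awaited.
    sync_only = StructuredTool.from_function(func=multiply)
    print(await sync_only.ainvoke({"a": 2, "b": 3}))

    # Coroutine-backed tool: goes through super().ainvoke(); without the added
    # `await` this path handed back an un-awaited coroutine instead of 6.
    with_coroutine = StructuredTool.from_function(func=multiply, coroutine=amultiply)
    print(await with_coroutine.ainvoke({"a": 2, "b": 3}))


asyncio.run(main())
```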
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,378 | DOC: Incorrect and confusing documentation of AIMessagePromptTemplate and HumanMessagePromptTemplate | ### Issue with current documentation:
[AIMessagePromptTemplate documentation](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html#langchain-prompts-chat-aimessageprompttemplate) incorrectly and confusingly describes the message as "... This is a message that is not sent to the user."
[HumanMessagePromptTemplate documentation](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.HumanMessagePromptTemplate.html#langchain-prompts-chat-humanmessageprompttemplate) incorrectly and confusingly describes the message as "... This is a message that is sent to the user."
Compare to the documentation for [AIMessage](https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.AIMessage.html#langchain-schema-messages-aimessage) and [HumanMessage](https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.HumanMessage.html#langchain-schema-messages-humanmessage), which correctly and clearly describe each message as "A message from an AI" and "A message from a human." respectively.
### Idea or request for content:
AIMessagePromptTemplate should be described as "AI message prompt template. This is a message that is sent to the user from the AI."
HumanMessagePromptTemplate should be described as "Human message prompt template. This is a message that is sent from the user to the AI."
These are clear, concise and consistent with documentation of the message schema.
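As a quick way to see the current wording, a small illustrative snippet (assuming langchain is installed; this is not part of the proposed PR):
```python
# Print the docstrings that the API reference is generated from.
from langchain.prompts.chat import AIMessagePromptTemplate, HumanMessagePromptTemplate

print(AIMessagePromptTemplate.__doc__)     # currently ends with "... not sent to the user."
print(HumanMessagePromptTemplate.__doc__)  # currently ends with "... sent to the user."
```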
I will submit a PR with revised docstrings for each class. This should, then, be reflected in the API reference documentation upon next build. | https://github.com/langchain-ai/langchain/issues/10378 | https://github.com/langchain-ai/langchain/pull/10379 | 8c0f391815eac61f2b5d1b993e9bc4795808696f | c902a1545bfbc3015defcd1c3ee435d38db4ee34 | "2023-09-08T16:43:51Z" | python | "2023-09-08T22:53:08Z" | libs/langchain/langchain/prompts/chat.py | """Chat prompt template."""
from __future__ import annotations
from abc import ABC, abstractmethod
from pathlib import Path
from typing import (
Any,
Callable,
    Dict,
    List,
Sequence,
Set,
Tuple,
Type,
TypeVar,
Union,
overload,
)
from langchain._api import deprecated
from langchain.load.serializable import Serializable
from langchain.prompts.base import StringPromptTemplate
from langchain.prompts.prompt import PromptTemplate
from langchain.pydantic_v1 import Field, root_validator
from langchain.schema import (
BasePromptTemplate,
PromptValue,
)
from langchain.schema.messages import (
AIMessage,
BaseMessage,
ChatMessage,
HumanMessage,
SystemMessage,
get_buffer_string,
)
class BaseMessagePromptTemplate(Serializable, ABC):
"""Base class for message prompt templates."""
@property
    def lc_serializable(self) -> bool:
        """Whether this object should be serialized.
Returns:
Whether this object should be serialized.
"""
return True
@abstractmethod
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
"""Format messages from kwargs. Should return a list of BaseMessages.
Args:
**kwargs: Keyword arguments to use for formatting.
Returns:
List of BaseMessages.
"""
@property
@abstractmethod
def input_variables(self) -> List[str]:
"""Input variables for this prompt template.
Returns:
List of input variables.
"""
def __add__(self, other: Any) -> ChatPromptTemplate:
"""Combine two prompt templates.
Args:
other: Another prompt template.
Returns:
Combined prompt template.
"""
prompt = ChatPromptTemplate(messages=[self])
return prompt + other
class MessagesPlaceholder(BaseMessagePromptTemplate):
    """Prompt template that assumes variable is already list of messages."""
variable_name: str
"""Name of variable to use as messages."""
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
"""Format messages from kwargs.
Args:
**kwargs: Keyword arguments to use for formatting.
Returns:
List of BaseMessage.
"""
value = kwargs[self.variable_name]
if not isinstance(value, list):
raise ValueError(
f"variable {self.variable_name} should be a list of base messages, "
f"got {value}"
)
for v in value:
if not isinstance(v, BaseMessage):
raise ValueError(
f"variable {self.variable_name} should be a list of base messages,"
f" got {value}"
)
return value
@property
    def input_variables(self) -> List[str]:
        """Input variables for this prompt template.
Returns:
List of input variable names.
"""
return [self.variable_name]
MessagePromptTemplateT = TypeVar(
"MessagePromptTemplateT", bound="BaseStringMessagePromptTemplate"
)
"""Type variable for message prompt templates."""
class BaseStringMessagePromptTemplate(BaseMessagePromptTemplate, ABC):
    """Base class for message prompt templates that use a string prompt template."""
prompt: StringPromptTemplate
"""String prompt template."""
additional_kwargs: dict = Field(default_factory=dict)
"""Additional keyword arguments to pass to the prompt template."""
@classmethod
def from_template(
cls: Type[MessagePromptTemplateT],
template: str,
template_format: str = "f-string",
**kwargs: Any,
) -> MessagePromptTemplateT:
"""Create a class from a string template.
Args:
template: a template.
template_format: format of the template.
**kwargs: keyword arguments to pass to the constructor.
Returns:
A new instance of this class.
"""
prompt = PromptTemplate.from_template(template, template_format=template_format)
return cls(prompt=prompt, **kwargs)
@classmethod
    def from_template_file(
        cls: Type[MessagePromptTemplateT],
template_file: Union[str, Path],
input_variables: List[str],
**kwargs: Any,
) -> MessagePromptTemplateT:
"""Create a class from a template file.
Args:
template_file: path to a template file. String or Path.
input_variables: list of input variables.
**kwargs: keyword arguments to pass to the constructor.
Returns:
A new instance of this class.
"""
prompt = PromptTemplate.from_file(template_file, input_variables)
return cls(prompt=prompt, **kwargs)
@abstractmethod
    def format(self, **kwargs: Any) -> BaseMessage:
        """Format the prompt template.
Args:
**kwargs: Keyword arguments to use for formatting.
Returns:
Formatted message.
"""
def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
"""Format messages from kwargs.
Args:
**kwargs: Keyword arguments to use for formatting.
Returns:
List of BaseMessages.
"""
return [self.format(**kwargs)]
@property
def input_variables(self) -> List[str]:
"""
Input variables for this prompt template.
Returns:
List of input variable names.
"""
return self.prompt.input_variables
class ChatMessagePromptTemplate(BaseStringMessagePromptTemplate):
    """Chat message prompt template."""
role: str
"""Role of the message."""
def format(self, **kwargs: Any) -> BaseMessage:
"""Format the prompt template.
Args:
**kwargs: Keyword arguments to use for formatting.
Returns:
Formatted message.
"""
text = self.prompt.format(**kwargs)
return ChatMessage(
content=text, role=self.role, additional_kwargs=self.additional_kwargs
)
class HumanMessagePromptTemplate(BaseStringMessagePromptTemplate):
    """Human message prompt template. This is a message that is sent to the user."""
def format(self, **kwargs: Any) -> BaseMessage:
"""Format the prompt template.
Args:
**kwargs: Keyword arguments to use for formatting.
Returns:
Formatted message.
"""
text = self.prompt.format(**kwargs)
return HumanMessage(content=text, additional_kwargs=self.additional_kwargs)
class AIMessagePromptTemplate(BaseStringMessagePromptTemplate):
"""AI message prompt template. This is a message that is not sent to the user."""
def format(self, **kwargs: Any) -> BaseMessage:
"""Format the prompt template.
Args:
**kwargs: Keyword arguments to use for formatting.
Returns:
Formatted message.
"""
text = self.prompt.format(**kwargs)
return AIMessage(content=text, additional_kwargs=self.additional_kwargs)
class SystemMessagePromptTemplate(BaseStringMessagePromptTemplate):
    """System message prompt template.
This is a message that is not sent to the user.
"""
def format(self, **kwargs: Any) -> BaseMessage:
"""Format the prompt template.
Args:
**kwargs: Keyword arguments to use for formatting.
Returns:
Formatted message.
"""
text = self.prompt.format(**kwargs)
return SystemMessage(content=text, additional_kwargs=self.additional_kwargs)
class ChatPromptValue(PromptValue):
"""Chat prompt value.
A type of a prompt value that is built from messages.
"""
messages: List[BaseMessage]
"""List of messages."""
def to_string(self) -> str:
"""Return prompt as string."""
return get_buffer_string(self.messages)
def to_messages(self) -> List[BaseMessage]:
"""Return prompt as a list of messages."""
return self.messages
class BaseChatPromptTemplate(BasePromptTemplate, ABC):
    """Base class for chat prompt templates."""
@property
def lc_attributes(self) -> Dict:
"""
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
"""
return {"input_variables": self.input_variables}
def format(self, **kwargs: Any) -> str:
"""Format the chat template into a string.
Args:
**kwargs: keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns:
formatted string
"""
return self.format_prompt(**kwargs).to_string()
def format_prompt(self, **kwargs: Any) -> PromptValue:
"""
Format prompt. Should return a PromptValue.
Args:
**kwargs: Keyword arguments to use for formatting.
Returns:
PromptValue.
"""
messages = self.format_messages(**kwargs)
return ChatPromptValue(messages=messages)
@abstractmethod
    def format_messages(self, **kwargs: Any) -> List[BaseMessage]:
        """Format kwargs into a list of messages."""
MessageLike = Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate]
MessageLikeRepresentation = Union[
MessageLike,
Tuple[str, str],
Tuple[Type, str],
str,
]
class ChatPromptTemplate(BaseChatPromptTemplate):
"""A prompt template for chat models.
Use to create flexible templated prompts for chat models.
Examples:
.. code-block:: python
from langchain.prompts import ChatPromptTemplate
template = ChatPromptTemplate.from_messages([
("system", "You are a helpful AI bot. Your name is {name}."),
("human", "Hello, how are you doing?"),
("ai", "I'm doing well, thanks!"),
("human", "{user_input}"),
])
messages = template.format_messages(
name="Bob",
user_input="What is your name?"
)
"""
input_variables: List[str]
"""List of input variables in template messages. Used for validation."""
messages: List[MessageLike]
"""List of messages consisting of either message prompt templates or messages."""
    def __add__(self, other: Any) -> ChatPromptTemplate:
        """Combine two prompt templates.
Args:
other: Another prompt template.
Returns:
Combined prompt template.
"""
if isinstance(other, ChatPromptTemplate):
return ChatPromptTemplate(messages=self.messages + other.messages)
elif isinstance(
other, (BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate)
):
return ChatPromptTemplate(messages=self.messages + [other])
elif isinstance(other, (list, tuple)):
_other = ChatPromptTemplate.from_messages(other)
return ChatPromptTemplate(messages=self.messages + _other.messages)
elif isinstance(other, str):
prompt = HumanMessagePromptTemplate.from_template(other)
return ChatPromptTemplate(messages=self.messages + [prompt])
else:
raise NotImplementedError(f"Unsupported operand type for +: {type(other)}")
@root_validator(pre=True)
    def validate_input_variables(cls, values: dict) -> dict:
        """Validate input variables.
If input_variables is not set, it will be set to the union of
all input variables in the messages.
Args:
values: values to validate.
Returns:
Validated values.
"""
messages = values["messages"]
input_vars = set()
for message in messages:
if isinstance(message, (BaseMessagePromptTemplate, BaseChatPromptTemplate)):
input_vars.update(message.input_variables)
if "partial_variables" in values:
input_vars = input_vars - set(values["partial_variables"])
if "input_variables" in values:
if input_vars != set(values["input_variables"]):
raise ValueError(
"Got mismatched input_variables. "
f"Expected: {input_vars}. "
f"Got: {values['input_variables']}"
)
else:
values["input_variables"] = sorted(input_vars)
return values
@classmethod
    def from_template(cls, template: str, **kwargs: Any) -> ChatPromptTemplate:
        """Create a chat prompt template from a template string.
Creates a chat template consisting of a single message assumed to be from
the human.
Args:
template: template string
**kwargs: keyword arguments to pass to the constructor.
Returns:
A new instance of this class.
"""
prompt_template = PromptTemplate.from_template(template, **kwargs)
message = HumanMessagePromptTemplate(prompt=prompt_template)
return cls.from_messages([message])
@classmethod
@deprecated("0.0.260", alternative="from_messages classmethod", pending=True)
    def from_role_strings(
        cls, string_messages: List[Tuple[str, str]]
) -> ChatPromptTemplate:
"""Create a chat prompt template from a list of (role, template) tuples.
Args:
string_messages: list of (role, template) tuples.
Returns:
a chat prompt template
"""
return cls(
messages=[
ChatMessagePromptTemplate.from_template(template, role=role)
for role, template in string_messages
]
)
@classmethod
@deprecated("0.0.260", alternative="from_messages classmethod", pending=True)
    def from_strings(
        cls, string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]
) -> ChatPromptTemplate:
"""Create a chat prompt template from a list of (role class, template) tuples.
Args:
string_messages: list of (role class, template) tuples.
Returns:
a chat prompt template
"""
return cls.from_messages(string_messages)
@classmethod
def from_messages(
cls,
messages: Sequence[MessageLikeRepresentation],
) -> ChatPromptTemplate:
"""Create a chat prompt template from a variety of message formats.
Examples:
Instantiation from a list of message templates:
.. code-block:: python
template = ChatPromptTemplate.from_messages([
("human", "Hello, how are you?"),
("ai", "I'm doing well, thanks!"),
("human", "That's good to hear."),
])
Instantiation from mixed message formats:
.. code-block:: python
template = ChatPromptTemplate.from_messages([
SystemMessage(content="hello"),
("human", "Hello, how are you?"), |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,378 | DOC: Incorrect and confusing documentation of AIMessagePromptTemplate and HumanMessagePromptTemplate | ### Issue with current documentation:
[AIMessagePromptTemplate documentation](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html#langchain-prompts-chat-aimessageprompttemplate) incorrectly and confusingly describes the message as "... This is a message that is not sent to the user."
[HumanMessagePromptTemplate documentation](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.HumanMessagePromptTemplate.html#langchain-prompts-chat-humanmessageprompttemplate) incorrectly and confusingly describes the message as "... This is a message that is sent to the user."
Compare to the documentation for [AIMessage](https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.AIMessage.html#langchain-schema-messages-aimessage) and [HumanMessage](https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.HumanMessage.html#langchain-schema-messages-humanmessage), which correctly and clearly describe each message as "A message from an AI" and "A message from a human." respectively.
### Idea or request for content:
AIMessagePromptTemplate should be described as "AI message prompt template. This is a message that is sent to the user from the AI."
HumanMessagePromptTemplate should be described as "Human message prompt template. This is a message that is sent from the user to the AI."
These are clear, concise and consistent with documentation of the message schema.
I will submit a PR with revised docstrings for each class. This should, then, be reflected in the API reference documentation upon next build. | https://github.com/langchain-ai/langchain/issues/10378 | https://github.com/langchain-ai/langchain/pull/10379 | 8c0f391815eac61f2b5d1b993e9bc4795808696f | c902a1545bfbc3015defcd1c3ee435d38db4ee34 | "2023-09-08T16:43:51Z" | python | "2023-09-08T22:53:08Z" | libs/langchain/langchain/prompts/chat.py | ])
Args:
messages: sequence of message representations.
A message can be represented using the following formats:
(1) BaseMessagePromptTemplate, (2) BaseMessage, (3) 2-tuple of
(message type, template); e.g., ("human", "{user_input}"),
(4) 2-tuple of (message class, template), (4) a string which is
shorthand for ("human", template); e.g., "{user_input}"
Returns:
a chat prompt template
"""
_messages = [_convert_to_message(message) for message in messages]
input_vars: Set[str] = set()
for _message in _messages:
if isinstance(
_message, (BaseChatPromptTemplate, BaseMessagePromptTemplate)
):
input_vars.update(_message.input_variables)
return cls(input_variables=sorted(input_vars), messages=_messages)
def format(self, **kwargs: Any) -> str:
"""Format the chat template into a string.
Args:
**kwargs: keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns:
formatted string
"""
return self.format_prompt(**kwargs).to_string()
def format_messages(self, **kwargs: Any) -> List[BaseMessage]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,378 | DOC: Incorrect and confusing documentation of AIMessagePromptTemplate and HumanMessagePromptTemplate
| https://github.com/langchain-ai/langchain/issues/10378 | https://github.com/langchain-ai/langchain/pull/10379 | 8c0f391815eac61f2b5d1b993e9bc4795808696f | c902a1545bfbc3015defcd1c3ee435d38db4ee34 | "2023-09-08T16:43:51Z" | python | "2023-09-08T22:53:08Z" | libs/langchain/langchain/prompts/chat.py | """Format the chat template into a list of finalized messages.
Args:
**kwargs: keyword arguments to use for filling in template variables
in all the template messages in this chat template.
Returns:
list of formatted messages
"""
kwargs = self._merge_partial_and_user_variables(**kwargs)
result = []
for message_template in self.messages:
if isinstance(message_template, BaseMessage):
result.extend([message_template])
elif isinstance(
message_template, (BaseMessagePromptTemplate, BaseChatPromptTemplate)
):
rel_params = {
k: v
for k, v in kwargs.items()
if k in message_template.input_variables
}
message = message_template.format_messages(**rel_params)
result.extend(message)
else:
raise ValueError(f"Unexpected input: {message_template}")
return result
def partial(self, **kwargs: Union[str, Callable[[], str]]) -> ChatPromptTemplate: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,378 | DOC: Incorrect and confusing documentation of AIMessagePromptTemplate and HumanMessagePromptTemplate
| https://github.com/langchain-ai/langchain/issues/10378 | https://github.com/langchain-ai/langchain/pull/10379 | 8c0f391815eac61f2b5d1b993e9bc4795808696f | c902a1545bfbc3015defcd1c3ee435d38db4ee34 | "2023-09-08T16:43:51Z" | python | "2023-09-08T22:53:08Z" | libs/langchain/langchain/prompts/chat.py | """Get a new ChatPromptTemplate with some input variables already filled in.
Args:
**kwargs: keyword arguments to use for filling in template variables. Ought
to be a subset of the input variables.
Returns:
A new ChatPromptTemplate.
Example:
.. code-block:: python
from langchain.prompts import ChatPromptTemplate
template = ChatPromptTemplate.from_messages(
[
("system", "You are an AI assistant named {name}."),
("human", "Hi I'm {user}"),
("ai", "Hi there, {user}, I'm {name}."),
("human", "{input}"),
]
)
template2 = template.partial(user="Lucy", name="R2D2")
template2.format_messages(input="hello")
"""
prompt_dict = self.__dict__.copy()
prompt_dict["input_variables"] = list(
set(self.input_variables).difference(kwargs)
)
prompt_dict["partial_variables"] = {**self.partial_variables, **kwargs}
return type(self)(**prompt_dict)
def append(self, message: MessageLikeRepresentation) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,378 | DOC: Incorrect and confusing documentation of AIMessagePromptTemplate and HumanMessagePromptTemplate
| https://github.com/langchain-ai/langchain/issues/10378 | https://github.com/langchain-ai/langchain/pull/10379 | 8c0f391815eac61f2b5d1b993e9bc4795808696f | c902a1545bfbc3015defcd1c3ee435d38db4ee34 | "2023-09-08T16:43:51Z" | python | "2023-09-08T22:53:08Z" | libs/langchain/langchain/prompts/chat.py | """Append message to the end of the chat template.
Args:
message: representation of a message to append.
"""
self.messages.append(_convert_to_message(message))
def extend(self, messages: Sequence[MessageLikeRepresentation]) -> None:
"""Extend the chat template with a sequence of messages."""
self.messages.extend([_convert_to_message(message) for message in messages])
@overload
def __getitem__(self, index: int) -> MessageLike: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,378 | DOC: Incorrect and confusing documentation of AIMessagePromptTemplate and HumanMessagePromptTemplate
| https://github.com/langchain-ai/langchain/issues/10378 | https://github.com/langchain-ai/langchain/pull/10379 | 8c0f391815eac61f2b5d1b993e9bc4795808696f | c902a1545bfbc3015defcd1c3ee435d38db4ee34 | "2023-09-08T16:43:51Z" | python | "2023-09-08T22:53:08Z" | libs/langchain/langchain/prompts/chat.py | ...
@overload
def __getitem__(self, index: slice) -> ChatPromptTemplate:
...
def __getitem__(
self, index: Union[int, slice]
) -> Union[MessageLike, ChatPromptTemplate]:
"""Use to index into the chat template."""
if isinstance(index, slice):
start, stop, step = index.indices(len(self.messages))
messages = self.messages[start:stop:step]
return ChatPromptTemplate.from_messages(messages)
else:
return self.messages[index]
def __len__(self) -> int:
"""Get the length of the chat template."""
return len(self.messages)
@property
def _prompt_type(self) -> str:
"""Name of prompt type."""
return "chat"
def save(self, file_path: Union[Path, str]) -> None:
"""Save prompt to file.
Args:
file_path: path to file.
"""
raise NotImplementedError()
def _create_template_from_message_type( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,378 | DOC: Incorrect and confusing documentation of AIMessagePromptTemplate and HumanMessagePromptTemplate
| https://github.com/langchain-ai/langchain/issues/10378 | https://github.com/langchain-ai/langchain/pull/10379 | 8c0f391815eac61f2b5d1b993e9bc4795808696f | c902a1545bfbc3015defcd1c3ee435d38db4ee34 | "2023-09-08T16:43:51Z" | python | "2023-09-08T22:53:08Z" | libs/langchain/langchain/prompts/chat.py | message_type: str, template: str
) -> BaseMessagePromptTemplate:
"""Create a message prompt template from a message type and template string.
Args:
message_type: str the type of the message template (e.g., "human", "ai", etc.)
template: str the template string.
Returns:
a message prompt template of the appropriate type.
"""
if message_type in ("human", "user"):
message: BaseMessagePromptTemplate = HumanMessagePromptTemplate.from_template(
template
)
elif message_type in ("ai", "assistant"):
message = AIMessagePromptTemplate.from_template(template)
elif message_type == "system":
message = SystemMessagePromptTemplate.from_template(template)
else:
raise ValueError(
f"Unexpected message type: {message_type}. Use one of 'human',"
f" 'user', 'ai', 'assistant', or 'system'."
)
return message
def _convert_to_message(
message: MessageLikeRepresentation,
) -> Union[BaseMessage, BaseMessagePromptTemplate, BaseChatPromptTemplate]:
"""Instantiate a message from a variety of message formats.
The message format can be one of the following: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,378 | DOC: Incorrect and confusing documentation of AIMessagePromptTemplate and HumanMessagePromptTemplate
| https://github.com/langchain-ai/langchain/issues/10378 | https://github.com/langchain-ai/langchain/pull/10379 | 8c0f391815eac61f2b5d1b993e9bc4795808696f | c902a1545bfbc3015defcd1c3ee435d38db4ee34 | "2023-09-08T16:43:51Z" | python | "2023-09-08T22:53:08Z" | libs/langchain/langchain/prompts/chat.py | - BaseMessagePromptTemplate
- BaseMessage
- 2-tuple of (role string, template); e.g., ("human", "{user_input}")
- 2-tuple of (message class, template)
- string: shorthand for ("human", template); e.g., "{user_input}"
Args:
message: a representation of a message in one of the supported formats
Returns:
an instance of a message or a message template
"""
if isinstance(message, (BaseMessagePromptTemplate, BaseChatPromptTemplate)):
_message: Union[
BaseMessage, BaseMessagePromptTemplate, BaseChatPromptTemplate
] = message
elif isinstance(message, BaseMessage):
_message = message
elif isinstance(message, str):
_message = _create_template_from_message_type("human", message)
elif isinstance(message, tuple):
if len(message) != 2:
raise ValueError(f"Expected 2-tuple of (role, template), got {message}")
message_type_str, template = message
if isinstance(message_type_str, str):
_message = _create_template_from_message_type(message_type_str, template)
else:
_message = message_type_str(prompt=PromptTemplate.from_template(template))
else:
raise NotImplementedError(f"Unsupported message type: {type(message)}")
return _message |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,186 | Issue: RedisVectorStoreRetriever not accessible | ### Issue you'd like to raise.
After PR [#8612](https://github.com/langchain-ai/langchain/pull/8612), access to [RedisVectorStoreRetriever](https://github.com/langchain-ai/langchain/blob/27944cb611ee8face34fbe764c83e37841f96eb7/libs/langchain/langchain/vectorstores/redis/base.py#L1293) has been removed
### Suggestion:
Include **RedisVectorStoreRetriever** import in [redis/__init__.py](https://github.com/langchain-ai/langchain/blob/27944cb611ee8face34fbe764c83e37841f96eb7/libs/langchain/langchain/vectorstores/redis/__init__.py) on line 1
current: `from .base import Redis`
suggested update: `from .base import Redis, RedisVectorStoreRetriever`
| https://github.com/langchain-ai/langchain/issues/10186 | https://github.com/langchain-ai/langchain/pull/10414 | d09ef9eb52466f991fc155567f234e5351f20d06 | 65e1606daa696e2190fcb410f190c6811f9f8dc3 | "2023-09-04T14:21:34Z" | python | "2023-09-10T00:46:34Z" | libs/langchain/langchain/vectorstores/redis/__init__.py | from .base import Redis
from .filters import (
RedisFilter,
RedisNum,
RedisTag,
RedisText,
)
__all__ = ["Redis", "RedisFilter", "RedisTag", "RedisText", "RedisNum"] |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints | ### System Info
langchain==0.0.244
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a new Matching Engine Index Endpoint that is public.
Follow the tutorial to make a similarity search:
```
vector_store = MatchingEngine.from_components(
project_id="",
region="us-central1",
gcs_bucket_name="",
index_id="",
endpoint_id="",
embedding=embeddings,
)
vector_store.similarity_search("what is a cat?", k=5)
```
Error:
```
File ~/code/gcp-langchain-retrieval-augmentation/embeddings/.venv/lib/python3.9/site-packages/grpc/_channel.py:1030, in _UnaryUnaryMultiCallable.__call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
1021 def __call__(self,
1022 request: Any,
1023 timeout: Optional[float] = None,
(...)
1026 wait_for_ready: Optional[bool] = None,
1027 compression: Optional[grpc.Compression] = None) -> Any:
1028 state, call, = self._blocking(request, timeout, metadata, credentials,
1029 wait_for_ready, compression)
-> 1030 return _end_unary_response_blocking(state, call, False, None)
File ~/code/gcp-langchain-retrieval-augmentation/embeddings/.venv/lib/python3.9/site-packages/grpc/_channel.py:910, in _end_unary_response_blocking(state, call, with_call, deadline)
908 return state.response
909 else:
--> 910 raise _InactiveRpcError(state)
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "DNS resolution failed for :10000: unparseable host:port"
debug_error_string = "UNKNOWN:DNS resolution failed for :10000: unparseable host:port {created_time:"2023-07-27T20:12:23.727315699+00:00", grpc_status:14}"
>
```
### Expected behavior
It should be possible to do this. The VertexAI Python SDK supports it with the `endpoint.find_neighbors` function.
I think just changing [the wrapper](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/matching_engine.py#L178) from `.match` to `.find_neighbors` for when the endpoint is public should do it.
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | from __future__ import annotations
import json
import logging
import time
import uuid
from typing import TYPE_CHECKING, Any, Iterable, List, Optional, Type
from langchain.docstore.document import Document
from langchain.embeddings import TensorflowHubEmbeddings
from langchain.schema.embeddings import Embeddings
from langchain.vectorstores.base import VectorStore
if TYPE_CHECKING:
from google.cloud import storage
from google.cloud.aiplatform import MatchingEngineIndex, MatchingEngineIndexEndpoint
from google.oauth2.service_account import Credentials
logger = logging.getLogger()
class MatchingEngine(VectorStore): |
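The reporter's proposed workaround — calling `find_neighbors` instead of `match` when the index endpoint is public — could be sketched as a small routing helper like the one below. The `public_endpoint_domain_name` check and the helper itself are assumptions for illustration, not the implementation merged in PR 10056.

```python
from typing import Any, List

# Hypothetical helper illustrating the fix suggested in issue 8378.
# Assumption: a public MatchingEngineIndexEndpoint exposes a non-empty
# public_endpoint_domain_name and supports find_neighbors().
def query_matching_engine(
    endpoint: Any,
    deployed_index_id: str,
    embedding_query: List[List[float]],
    k: int,
) -> Any:
    if getattr(endpoint, "public_endpoint_domain_name", None):
        # Public endpoints: use the Vertex AI SDK's find_neighbors().
        return endpoint.find_neighbors(
            deployed_index_id=deployed_index_id,
            queries=embedding_query,
            num_neighbors=k,
        )
    # Private (VPC-peered) endpoints: keep the existing match() call.
    return endpoint.match(
        deployed_index_id=deployed_index_id,
        queries=embedding_query,
        num_neighbors=k,
    )
```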
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | """`Google Vertex AI Matching Engine` vector store.
While the embeddings are stored in the Matching Engine, the embedded
documents will be stored in GCS.
An existing Index and corresponding Endpoint are preconditions for
using this module.
See usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb
Note that this implementation is mostly meant for reading if you are
planning to do a real time implementation. While reading is a real time
operation, updating the index takes close to one hour."""
def __init__(
self,
project_id: str,
index: MatchingEngineIndex,
endpoint: MatchingEngineIndexEndpoint,
embedding: Embeddings,
gcs_client: storage.Client,
gcs_bucket_name: str,
credentials: Optional[Credentials] = None,
):
"""Vertex Matching Engine implementation of the vector store.
While the embeddings are stored in the Matching Engine, the embedded
documents will be stored in GCS.
An existing Index and corresponding Endpoint are preconditions for
using this module.
See usage in |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | docs/modules/indexes/vectorstores/examples/matchingengine.ipynb.
Note that this implementation is mostly meant for reading if you are
planning to do a real time implementation. While reading is a real time
operation, updating the index takes close to one hour.
Attributes:
project_id: The GCS project id.
index: The created index class. See
~:func:`MatchingEngine.from_components`.
endpoint: The created endpoint class. See
~:func:`MatchingEngine.from_components`.
embedding: A :class:`Embeddings` that will be used for
embedding the text sent. If none is sent, then the
multilingual Tensorflow Universal Sentence Encoder will be used.
gcs_client: The GCS client.
gcs_bucket_name: The GCS bucket name.
credentials (Optional): Created GCP credentials.
"""
super().__init__()
self._validate_google_libraries_installation()
self.project_id = project_id
self.index = index
self.endpoint = endpoint
self.embedding = embedding
self.gcs_client = gcs_client
self.credentials = credentials
self.gcs_bucket_name = gcs_bucket_name
@property
def embeddings(self) -> Embeddings:
return self.embedding
def _validate_google_libraries_installation(self) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | """Validates that Google libraries that are needed are installed."""
try:
from google.cloud import aiplatform, storage
from google.oauth2 import service_account
except ImportError:
raise ImportError(
"You must run `pip install --upgrade "
"google-cloud-aiplatform google-cloud-storage`"
"to use the MatchingEngine Vectorstore."
)
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts: Iterable of strings to add to the vectorstore.
metadatas: Optional list of metadatas associated with the texts.
kwargs: vectorstore specific parameters.
Returns:
List of ids from adding the texts into the vectorstore.
"""
logger.debug("Embedding documents.")
embeddings = self.embedding.embed_documents(list(texts))
jsons = []
ids = [] |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | for embedding, text in zip(embeddings, texts):
id = str(uuid.uuid4())
ids.append(id)
jsons.append({"id": id, "embedding": embedding})
self._upload_to_gcs(text, f"documents/{id}")
logger.debug(f"Uploaded {len(ids)} documents to GCS.")
result_str = "\n".join([json.dumps(x) for x in jsons])
filename_prefix = f"indexes/{uuid.uuid4()}"
filename = f"{filename_prefix}/{time.time()}.json"
self._upload_to_gcs(result_str, filename)
logger.debug(
f"Uploaded updated json with embeddings to "
f"{self.gcs_bucket_name}/{filename}."
)
self.index = self.index.update_embeddings(
contents_delta_uri=f"gs://{self.gcs_bucket_name}/{filename_prefix}/"
)
logger.debug("Updated index with new configuration.")
return ids
def _upload_to_gcs(self, data: str, gcs_location: str) -> None:
"""Uploads data to gcs_location.
Args:
data: The data that will be stored.
gcs_location: The location where the data will be stored.
"""
bucket = self.gcs_client.get_bucket(self.gcs_bucket_name)
blob = bucket.blob(gcs_location)
blob.upload_from_string(data)
def similarity_search( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
Args:
query: The string that will be used to search for similar documents.
k: The amount of neighbors that will be retrieved.
Returns:
A list of k matching documents.
"""
logger.debug(f"Embedding query {query}.")
embedding_query = self.embedding.embed_documents([query])
response = self.endpoint.match(
deployed_index_id=self._get_index_id(),
queries=embedding_query,
num_neighbors=k,
)
if len(response) == 0:
return []
logger.debug(f"Found {len(response)} matches for the query {query}.")
results = []
for doc in response[0]:
page_content = self._download_from_gcs(f"documents/{doc.id}")
results.append(Document(page_content=page_content))
logger.debug("Downloaded documents for query.")
return results
def _get_index_id(self) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | """Gets the correct index id for the endpoint.
Returns:
The index id if found (which should be found) or throws
ValueError otherwise.
"""
for index in self.endpoint.deployed_indexes:
if index.index == self.index.resource_name:
return index.id
raise ValueError(
f"No index with id {self.index.resource_name} "
f"deployed on endpoint "
f"{self.endpoint.display_name}."
)
def _download_from_gcs(self, gcs_location: str) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | """Downloads from GCS in text format.
Args:
gcs_location: The location where the file is located.
Returns:
The string contents of the file.
"""
bucket = self.gcs_client.get_bucket(self.gcs_bucket_name)
blob = bucket.blob(gcs_location)
return blob.download_as_string()
@classmethod
def from_texts(
cls: Type["MatchingEngine"],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> "MatchingEngine":
"""Use from components instead."""
raise NotImplementedError(
"This method is not implemented. Instead, you should initialize the class"
" with `MatchingEngine.from_components(...)` and then call "
"`add_texts`"
)
@classmethod
def from_components( |