Dataset schema (column name: dtype, value statistics):
status: stringclasses, 1 value
repo_name: stringclasses, 31 values
repo_url: stringclasses, 31 values
issue_id: int64, 1 to 104k
title: stringlengths, 4 to 233
body: stringlengths, 0 to 186k
issue_url: stringlengths, 38 to 56
pull_url: stringlengths, 37 to 54
before_fix_sha: stringlengths, 40 to 40
after_fix_sha: stringlengths, 40 to 40
report_datetime: unknown
language: stringclasses, 5 values
commit_datetime: unknown
updated_file: stringlengths, 7 to 188
chunk_content: stringlengths, 1 to 1.03M
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error is caused by the recent change to OpenAI version 1.0.0. **Use OpenAI==0.28.1 to work around this error.** With the code `embeddings = OpenAIEmbeddings()`, the error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive a new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`). ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/tests/integration_tests/embeddings/test_openai.py
"""Test openai embeddings.""" document = "foo bar" embedding = OpenAIEmbeddings() output = embedding.embed_query(document) assert len(output) == 1536 @pytest.mark.scheduled @pytest.mark.asyncio async def test_openai_embedding_async_query() -> None: """Test openai embeddings.""" document = "foo bar" embedding = OpenAIEmbeddings() output = await embedding.aembed_query(document) assert len(output) == 1536 @pytest.mark.skip(reason="Unblock scheduled testing. TODO: fix.") @pytest.mark.scheduled def test_openai_embedding_with_empty_string() -> None: """Test openai embeddings with empty string.""" document = ["", "abc"] embedding = OpenAIEmbeddings() output = embedding.embed_documents(document) assert len(output) == 2 assert len(output[0]) == 1536 expected_output = openai.Embedding.create(input="", model="text-embedding-ada-002")[ "data" ][0]["embedding"] assert np.allclose(output[0], expected_output) assert len(output[1]) == 1536 @pytest.mark.scheduled def test_embed_documents_normalized() -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error is caused by the recent change to OpenAI version 1.0.0. **Use OpenAI==0.28.1 to work around this error.** With the code `embeddings = OpenAIEmbeddings()`, the error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive a new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`). ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/tests/integration_tests/embeddings/test_openai.py
    output = OpenAIEmbeddings().embed_documents(["foo walked to the market"])
    assert np.isclose(np.linalg.norm(output[0]), 1.0)


@pytest.mark.scheduled
def test_embed_query_normalized() -> None:
    output = OpenAIEmbeddings().embed_query("foo walked to the market")
    assert np.isclose(np.linalg.norm(output), 1.0)


def test_azure_openai_embeddings() -> None:
    from openai import error

    os.environ["OPENAI_API_TYPE"] = "azure"
    os.environ["OPENAI_API_BASE"] = "https://your-endpoint.openai.azure.com/"
    os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
    os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"

    embeddings = OpenAIEmbeddings(deployment="your-embeddings-deployment-name")
    text = "This is a test document."
    try:
        embeddings.embed_query(text)
    except error.InvalidRequestError as e:
        if "Must provide an 'engine' or 'deployment_id' parameter" in str(e):
            assert (
                False
            ), "deployment was provided but openai.Embeddings didn't get it."
    except Exception:
        pass
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
from __future__ import annotations

import logging
import uuid
import warnings
from typing import TYPE_CHECKING, Any, Callable, Iterable, List, Optional, Tuple, Union

import numpy as np

from langchain.docstore.document import Document
from langchain.schema.embeddings import Embeddings
from langchain.schema.vectorstore import VectorStore
from langchain.utils.iter import batch_iterate
from langchain.vectorstores.utils import DistanceStrategy, maximal_marginal_relevance

if TYPE_CHECKING:
    from pinecone import Index

logger = logging.getLogger(__name__)


class Pinecone(VectorStore):
    """`Pinecone` vector store.

    To use, you should have the ``pinecone-client`` python package installed.

    Example:
        .. code-block:: python

            from langchain.vectorstores import Pinecone
            from langchain.embeddings.openai import OpenAIEmbeddings
            import pinecone

            # The environment should be the one specified next to the API key
            # in your Pinecone console
            pinecone.init(api_key="***", environment="...")
            index = pinecone.Index("langchain-demo")
            embeddings = OpenAIEmbeddings()
            vectorstore = Pinecone(index, embeddings.embed_query, "text")
    """

    def __init__(
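The remedy the issue suggests is small: the base `VectorStore._cosine_relevance_score_fn` turns a distance into a relevance score via `1.0 - distance`, but Pinecone already returns a cosine similarity. A minimal sketch of the suggested override follows; the subclass name is hypothetical and the merged PR may differ in detail.

from langchain.vectorstores import Pinecone


class SimilarityPreservingPinecone(Pinecone):
    @staticmethod
    def _cosine_relevance_score_fn(score: float) -> float:
        # Pinecone already returns cosine *similarity*, so pass it
        # through instead of computing 1.0 - score.
        return score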
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
        self,
        index: Any,
        embedding: Union[Embeddings, Callable],
        text_key: str,
        namespace: Optional[str] = None,
        distance_strategy: Optional[DistanceStrategy] = DistanceStrategy.COSINE,
    ):
        """Initialize with Pinecone client."""
        try:
            import pinecone
        except ImportError:
            raise ImportError(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
"Could not import pinecone python package. " "Please install it with `pip install pinecone-client`." ) if not isinstance(embedding, Embeddings): warnings.warn( "Passing in `embedding` as a Callable is deprecated. Please pass in an" " Embeddings object instead." ) if not isinstance(index, pinecone.index.Index): raise ValueError( f"client should be an instance of pinecone.index.Index, " f"got {type(index)}" ) self._index = index self._embedding = embedding self._text_key = text_key self._namespace = namespace self.distance_strategy = distance_strategy @property def embeddings(self) -> Optional[Embeddings]: """Access the query embedding object if available.""" if isinstance(self._embedding, Embeddings): return self._embedding return None def _embed_documents(self, texts: Iterable[str]) -> List[List[float]]: """Embed search docs.""" if isinstance(self._embedding, Embeddings): return self._embedding.embed_documents(list(texts)) return [self._embedding(t) for t in texts] def _embed_query(self, text: str) -> List[float]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
"""Embed query text.""" if isinstance(self._embedding, Embeddings): return self._embedding.embed_query(text) return self._embedding(text) def add_texts( self, texts: Iterable[str], metadatas: Optional[List[dict]] = None, ids: Optional[List[str]] = None, namespace: Optional[str] = None, batch_size: int = 32, embedding_chunk_size: int = 1000, **kwargs: Any, ) -> List[str]: """Run more texts through the embeddings and add to the vectorstore. Upsert optimization is done by chunking the embeddings and upserting them. This is done to avoid memory issues and optimize using HTTP based embeddings. For OpenAI embeddings, use pool_threads>4 when constructing the pinecone.Index, embedding_chunk_size>1000 and batch_size~64 for best performance. Args: texts: Iterable of strings to add to the vectorstore. metadatas: Optional list of metadatas associated with the texts. ids: Optional list of ids to associate with the texts. namespace: Optional pinecone namespace to add the texts to. batch_size: Batch size to use when adding the texts to the vectorstore. embedding_chunk_size: Chunk size to use when embedding the texts. Returns: List of ids from adding the texts into the vectorstore.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
""" if namespace is None: namespace = self._namespace texts = list(texts) ids = ids or [str(uuid.uuid4()) for _ in texts] metadatas = metadatas or [{} for _ in texts] for metadata, text in zip(metadatas, texts): metadata[self._text_key] = text for i in range(0, len(texts), embedding_chunk_size): chunk_texts = texts[i : i + embedding_chunk_size] chunk_ids = ids[i : i + embedding_chunk_size] chunk_metadatas = metadatas[i : i + embedding_chunk_size] embeddings = self._embed_documents(chunk_texts) async_res = [ self._index.upsert( vectors=batch, namespace=namespace, async_req=True, **kwargs, ) for batch in batch_iterate( batch_size, zip(chunk_ids, embeddings, chunk_metadatas) ) ] [res.get() for res in async_res] return ids def similarity_search_with_score(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
        self,
        query: str,
        k: int = 4,
        filter: Optional[dict] = None,
        namespace: Optional[str] = None,
    ) -> List[Tuple[Document, float]]:
        """Return pinecone documents most similar to query, along with scores.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            filter: Dictionary of argument(s) to filter on metadata
            namespace: Namespace to search in. Default will search in '' namespace.

        Returns:
            List of Documents most similar to the query and score for each
        """
        return self.similarity_search_by_vector_with_score(
            self._embed_query(query), k=k, filter=filter, namespace=namespace
        )

    def similarity_search_by_vector_with_score(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
        self,
        embedding: List[float],
        *,
        k: int = 4,
        filter: Optional[dict] = None,
        namespace: Optional[str] = None,
    ) -> List[Tuple[Document, float]]:
        """Return pinecone documents most similar to embedding, along with scores."""
        if namespace is None:
            namespace = self._namespace
        docs = []
        results = self._index.query(
            [embedding],
            top_k=k,
            include_metadata=True,
            namespace=namespace,
            filter=filter,
        )
        for res in results["matches"]:
            metadata = res["metadata"]
            if self._text_key in metadata:
                text = metadata.pop(self._text_key)
                score = res["score"]
                docs.append((Document(page_content=text, metadata=metadata), score))
            else:
                logger.warning(
                    f"Found document with no `{self._text_key}` key. Skipping."
                )
        return docs

    def similarity_search(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
        self,
        query: str,
        k: int = 4,
        filter: Optional[dict] = None,
        namespace: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return pinecone documents most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            filter: Dictionary of argument(s) to filter on metadata
            namespace: Namespace to search in. Default will search in '' namespace.

        Returns:
            List of Documents most similar to the query and score for each
        """
        docs_and_scores = self.similarity_search_with_score(
            query, k=k, filter=filter, namespace=namespace, **kwargs
        )
        return [doc for doc, _ in docs_and_scores]

    def _select_relevance_score_fn(self) -> Callable[[float], float]:
        """
        The 'correct' relevance function may differ depending on a few things,
        including:
        - the distance / similarity metric used by the VectorStore
        - the scale of your embeddings (OpenAI's are unit normed. Many others are not!)
        - embedding dimensionality
        - etc.
        """
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
        if self.distance_strategy == DistanceStrategy.COSINE:
            return self._cosine_relevance_score_fn
        elif self.distance_strategy == DistanceStrategy.MAX_INNER_PRODUCT:
            return self._max_inner_product_relevance_score_fn
        elif self.distance_strategy == DistanceStrategy.EUCLIDEAN_DISTANCE:
            return self._euclidean_relevance_score_fn
        else:
            raise ValueError(
                "Unknown distance strategy, must be cosine, max_inner_product "
                "(dot product), or euclidean"
            )

    def max_marginal_relevance_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[dict] = None,
        namespace: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
            embedding: Embedding to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                         of diversity among the results with 0 corresponding
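This dispatch is the path the issue exercises: a retriever with a `score_threshold` keeps only documents whose relevance score clears the cutoff, so converting Pinecone's similarity into a distance inverts which documents survive. A hypothetical reproduction, assuming a `vectorstore` built as in the earlier sketches:

retriever = vectorstore.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={"score_threshold": 0.8, "k": 4},
)
# With cosine *similarity* mapped to 1.0 - score, the highest-similarity
# matches fall below 0.8 and get filtered out, which is the reported bug.
docs = retriever.get_relevant_documents("What did the president say?")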
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
                         to maximum diversity and 1 to minimum diversity.
                         Defaults to 0.5.
        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        if namespace is None:
            namespace = self._namespace
        results = self._index.query(
            [embedding],
            top_k=fetch_k,
            include_values=True,
            include_metadata=True,
            namespace=namespace,
            filter=filter,
        )
        mmr_selected = maximal_marginal_relevance(
            np.array([embedding], dtype=np.float32),
            [item["values"] for item in results["matches"]],
            k=k,
            lambda_mult=lambda_mult,
        )
        selected = [results["matches"][i]["metadata"] for i in mmr_selected]
        return [
            Document(page_content=metadata.pop(self._text_key), metadata=metadata)
            for metadata in selected
        ]

    def max_marginal_relevance_search(
        self,
        query: str,
        k: int = 4,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[dict] = None,
        namespace: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND diversity
        among selected documents.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            fetch_k: Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult: Number between 0 and 1 that determines the degree
                         of diversity among the results with 0 corresponding
                         to maximum diversity and 1 to minimum diversity.
                         Defaults to 0.5.
        Returns:
            List of Documents selected by maximal marginal relevance.
        """
        embedding = self._embed_query(query)
        return self.max_marginal_relevance_search_by_vector(
            embedding, k, fetch_k, lambda_mult, filter, namespace
        )

    @classmethod
    def get_pinecone_index(
        cls,
        index_name: Optional[str],
        pool_threads: int = 4,
    ) -> Index:
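An illustrative call of the MMR search defined above (the `vectorstore` is assumed from the earlier sketches, and the query string is a placeholder):

docs = vectorstore.max_marginal_relevance_search(
    "What did the president say?",
    k=4,            # documents returned after re-ranking
    fetch_k=20,     # candidates fetched from Pinecone first
    lambda_mult=0.5,  # 0 = max diversity, 1 = pure relevance
)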
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
"""Return a Pinecone Index instance. Args: index_name: Name of the index to use. pool_threads: Number of threads to use for index upsert. Returns: Pinecone Index instance.""" try: import pinecone except ImportError: raise ValueError( "Could not import pinecone python package. " "Please install it with `pip install pinecone-client`." ) indexes = pinecone.list_indexes() if index_name in indexes: index = pinecone.Index(index_name, pool_threads=pool_threads) elif len(indexes) == 0: raise ValueError( "No active indexes found in your Pinecone project, " "are you sure you're using the right Pinecone API key and Environment? " "Please double check your Pinecone dashboard." ) else: raise ValueError( f"Index '{index_name}' not found in your Pinecone project. " f"Did you mean one of the following indexes: {', '.join(indexes)}" ) return index @classmethod def from_texts(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        batch_size: int = 32,
        text_key: str = "text",
        namespace: Optional[str] = None,
        index_name: Optional[str] = None,
        upsert_kwargs: Optional[dict] = None,
        pool_threads: int = 4,
        embeddings_chunk_size: int = 1000,
        **kwargs: Any,
    ) -> Pinecone:
        """Construct Pinecone wrapper from raw documents.

        This is a user friendly interface that:
            1. Embeds documents.
            2. Adds the documents to a provided Pinecone index

        This is intended to be a quick way to get started.

        The `pool_threads` affects the speed of the upsert operations.

        Example:
            .. code-block:: python

                from langchain.vectorstores import Pinecone
                from langchain.embeddings import OpenAIEmbeddings
                import pinecone

                # The environment should be the one specified next to the API key
                # in your Pinecone console
                pinecone.init(api_key="***", environment="...")
                embeddings = OpenAIEmbeddings()
                pinecone = Pinecone.from_texts(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
                    texts,
                    embeddings,
                    index_name="langchain-demo"
                )
        """
        pinecone_index = cls.get_pinecone_index(index_name, pool_threads)
        pinecone = cls(pinecone_index, embedding, text_key, namespace, **kwargs)

        pinecone.add_texts(
            texts,
            metadatas=metadatas,
            ids=ids,
            namespace=namespace,
            batch_size=batch_size,
            embedding_chunk_size=embeddings_chunk_size,
            **(upsert_kwargs or {}),
        )
        return pinecone

    @classmethod
    def from_existing_index(
        cls,
        index_name: str,
        embedding: Embeddings,
        text_key: str = "text",
        namespace: Optional[str] = None,
        pool_threads: int = 4,
    ) -> Pinecone:
        """Load pinecone vectorstore from index name."""
        pinecone_index = cls.get_pinecone_index(index_name, pool_threads)
        return cls(pinecone_index, embedding, text_key, namespace)

    def delete(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,207
Issue: Pinecone retriever with Cosine Similarity is treated like Cosine Distance
### Issue you'd like to raise. When using document search from the existing Pinecone index that was created using Cosine **Similarity** in the `ConversationalRetrievalChain`, the `score_threshold` would eliminate the most relevant documents instead of the least relevant ones because the _similarity_ metric will be converted to _distance_. In [_select_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L172) it calls [_cosine_relevance_score_fn](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L169), which converts the similarity returned from the Pinecone search to a distance. Then, [filtering the documents](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/base.py#L266) based on the `score_threshold` eliminates the most relevant documents instead of the least relevant ones. ### Suggestion: The Pinecone subclass should override `_cosine_relevance_score_fn` to preserve the similarity, since that is what originally comes back from the Pinecone similarity search.
https://github.com/langchain-ai/langchain/issues/8207
https://github.com/langchain-ai/langchain/pull/8920
2e42ed5de68d27fe0ce676aae0cdaae778fcf16c
ff19a62afc2c8d6d9e705bd0af5ffad426263f49
"2023-07-24T22:23:33Z"
python
"2023-11-13T19:47:38Z"
libs/langchain/langchain/vectorstores/pinecone.py
        self,
        ids: Optional[List[str]] = None,
        delete_all: Optional[bool] = None,
        namespace: Optional[str] = None,
        filter: Optional[dict] = None,
        **kwargs: Any,
    ) -> None:
        """Delete by vector IDs or filter.

        Args:
            ids: List of ids to delete.
            filter: Dictionary of conditions to filter vectors to delete.
        """
        if namespace is None:
            namespace = self._namespace

        if delete_all:
            self._index.delete(delete_all=True, namespace=namespace, **kwargs)
        elif ids is not None:
            chunk_size = 1000
            for i in range(0, len(ids), chunk_size):
                chunk = ids[i : i + chunk_size]
                self._index.delete(ids=chunk, namespace=namespace, **kwargs)
        elif filter is not None:
            self._index.delete(filter=filter, namespace=namespace, **kwargs)
        else:
            raise ValueError("Either ids, delete_all, or filter must be provided.")

        return None
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,356
Issue: Notion DB loader doesn't support some properties
### Issue you'd like to raise. Notion page properties: https://developers.notion.com/reference/page-property-values The current version of the Notion DB loader doesn't support the following properties for metadata: - `checkbox` - `email` - `number` - `select` ### Suggestion: I would like to make a PR to fix this issue if it's okay.
https://github.com/langchain-ai/langchain/issues/13356
https://github.com/langchain-ai/langchain/pull/13358
c9b9359647f1f1b24f106b93f180509db7932950
3b5e8bacfa6d5b7c223cb93f09c6c21d39542b43
"2023-11-14T17:20:22Z"
python
"2023-11-15T04:31:12Z"
libs/langchain/langchain/document_loaders/notiondb.py
from typing import Any, Dict, List, Optional

import requests

from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader

NOTION_BASE_URL = "https://api.notion.com/v1"
DATABASE_URL = NOTION_BASE_URL + "/databases/{database_id}/query"
PAGE_URL = NOTION_BASE_URL + "/pages/{page_id}"
BLOCK_URL = NOTION_BASE_URL + "/blocks/{block_id}/children"


class NotionDBLoader(BaseLoader):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,356
Issue: Notion DB loader doesn't support some properties
### Issue you'd like to raise. Notion page properties: https://developers.notion.com/reference/page-property-values The current version of the Notion DB loader doesn't support the following properties for metadata: - `checkbox` - `email` - `number` - `select` ### Suggestion: I would like to make a PR to fix this issue if it's okay.
https://github.com/langchain-ai/langchain/issues/13356
https://github.com/langchain-ai/langchain/pull/13358
c9b9359647f1f1b24f106b93f180509db7932950
3b5e8bacfa6d5b7c223cb93f09c6c21d39542b43
"2023-11-14T17:20:22Z"
python
"2023-11-15T04:31:12Z"
libs/langchain/langchain/document_loaders/notiondb.py
"""Load from `Notion DB`. Reads content from pages within a Notion Database. Args: integration_token (str): Notion integration token. database_id (str): Notion database id. request_timeout_sec (int): Timeout for Notion requests in seconds. Defaults to 10. """ def __init__( self, integration_token: str, database_id: str, request_timeout_sec: Optional[int] = 10, ) -> None: """Initialize with parameters.""" if not integration_token: raise ValueError("integration_token must be provided") if not database_id: raise ValueError("database_id must be provided") self.token = integration_token self.database_id = database_id self.headers = { "Authorization": "Bearer " + self.token, "Content-Type": "application/json", "Notion-Version": "2022-06-28", } self.request_timeout_sec = request_timeout_sec def load(self) -> List[Document]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,356
Issue: Notion DB loader doesn't support some properties
### Issue you'd like to raise. Notion page properties: https://developers.notion.com/reference/page-property-values The current version of the Notion DB loader doesn't support the following properties for metadata: - `checkbox` - `email` - `number` - `select` ### Suggestion: I would like to make a PR to fix this issue if it's okay.
https://github.com/langchain-ai/langchain/issues/13356
https://github.com/langchain-ai/langchain/pull/13358
c9b9359647f1f1b24f106b93f180509db7932950
3b5e8bacfa6d5b7c223cb93f09c6c21d39542b43
"2023-11-14T17:20:22Z"
python
"2023-11-15T04:31:12Z"
libs/langchain/langchain/document_loaders/notiondb.py
"""Load documents from the Notion database. Returns: List[Document]: List of documents. """ page_summaries = self._retrieve_page_summaries() return list(self.load_page(page_summary) for page_summary in page_summaries) def _retrieve_page_summaries( self, query_dict: Dict[str, Any] = {"page_size": 100} ) -> List[Dict[str, Any]]: """Get all the pages from a Notion database.""" pages: List[Dict[str, Any]] = [] while True: data = self._request( DATABASE_URL.format(database_id=self.database_id), method="POST", query_dict=query_dict, ) pages.extend(data.get("results")) if not data.get("has_more"): break query_dict["start_cursor"] = data.get("next_cursor") return pages def load_page(self, page_summary: Dict[str, Any]) -> Document:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,356
Issue: Notion DB loader doesn't support some properties
### Issue you'd like to raise. Notion page properties: https://developers.notion.com/reference/page-property-values The current version of the Notion DB loader doesn't support the following properties for metadata: - `checkbox` - `email` - `number` - `select` ### Suggestion: I would like to make a PR to fix this issue if it's okay.
https://github.com/langchain-ai/langchain/issues/13356
https://github.com/langchain-ai/langchain/pull/13358
c9b9359647f1f1b24f106b93f180509db7932950
3b5e8bacfa6d5b7c223cb93f09c6c21d39542b43
"2023-11-14T17:20:22Z"
python
"2023-11-15T04:31:12Z"
libs/langchain/langchain/document_loaders/notiondb.py
"""Read a page. Args: page_summary: Page summary from Notion API. """ page_id = page_summary["id"] metadata: Dict[str, Any] = {} for prop_name, prop_data in page_summary["properties"].items(): prop_type = prop_data["type"] if prop_type == "rich_text": value = ( prop_data["rich_text"][0]["plain_text"] if prop_data["rich_text"] else None ) elif prop_type == "title": value = ( prop_data["title"][0]["plain_text"] if prop_data["title"] else None ) elif prop_type == "multi_select": value = ( [item["name"] for item in prop_data["multi_select"]] if prop_data["multi_select"] else [] ) elif prop_type == "url": value = prop_data["url"]
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,356
Issue: Notion DB loader doesn't support some properties
### Issue you'd like to raise. Notion page properties: https://developers.notion.com/reference/page-property-values The current version of the Notion DB loader doesn't support the following properties for metadata: - `checkbox` - `email` - `number` - `select` ### Suggestion: I would like to make a PR to fix this issue if it's okay.
https://github.com/langchain-ai/langchain/issues/13356
https://github.com/langchain-ai/langchain/pull/13358
c9b9359647f1f1b24f106b93f180509db7932950
3b5e8bacfa6d5b7c223cb93f09c6c21d39542b43
"2023-11-14T17:20:22Z"
python
"2023-11-15T04:31:12Z"
libs/langchain/langchain/document_loaders/notiondb.py
elif prop_type == "unique_id": value = ( f'{prop_data["unique_id"]["prefix"]}-{prop_data["unique_id"]["number"]}' if prop_data["unique_id"] else None ) elif prop_type == "status": value = prop_data["status"]["name"] if prop_data["status"] else None elif prop_type == "people": value = ( [item["name"] for item in prop_data["people"]] if prop_data["people"] else [] ) elif prop_type == "date": value = prop_data["date"] if prop_data["date"] else None elif prop_type == "last_edited_time": value = ( prop_data["last_edited_time"] if prop_data["last_edited_time"] else None ) elif prop_type == "created_time": value = prop_data["created_time"] if prop_data["created_time"] else None else: value = None metadata[prop_name.lower()] = value metadata["id"] = page_id return Document(page_content=self._load_blocks(page_id), metadata=metadata) def _load_blocks(self, block_id: str, num_tabs: int = 0) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,356
Issue: Notion DB loader doesn't support some properties
### Issue you'd like to raise. Notion page properties: https://developers.notion.com/reference/page-property-values The current version of the Notion DB loader doesn't support the following properties for metadata: - `checkbox` - `email` - `number` - `select` ### Suggestion: I would like to make a PR to fix this issue if it's okay.
https://github.com/langchain-ai/langchain/issues/13356
https://github.com/langchain-ai/langchain/pull/13358
c9b9359647f1f1b24f106b93f180509db7932950
3b5e8bacfa6d5b7c223cb93f09c6c21d39542b43
"2023-11-14T17:20:22Z"
python
"2023-11-15T04:31:12Z"
libs/langchain/langchain/document_loaders/notiondb.py
"""Read a block and its children.""" result_lines_arr: List[str] = [] cur_block_id: str = block_id while cur_block_id: data = self._request(BLOCK_URL.format(block_id=cur_block_id)) for result in data["results"]: result_obj = result[result["type"]] if "rich_text" not in result_obj: continue cur_result_text_arr: List[str] = [] for rich_text in result_obj["rich_text"]: if "text" in rich_text: cur_result_text_arr.append( "\t" * num_tabs + rich_text["text"]["content"] ) if result["has_children"]: children_text = self._load_blocks( result["id"], num_tabs=num_tabs + 1 ) cur_result_text_arr.append(children_text) result_lines_arr.append("\n".join(cur_result_text_arr)) cur_block_id = data.get("next_cursor") return "\n".join(result_lines_arr) def _request(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,356
Issue: Notion DB loader doesn't support some properties
### Issue you'd like to raise. Notion page properties: https://developers.notion.com/reference/page-property-values The current version of the Notion DB loader doesn't support the following properties for metadata: - `checkbox` - `email` - `number` - `select` ### Suggestion: I would like to make a PR to fix this issue if it's okay.
https://github.com/langchain-ai/langchain/issues/13356
https://github.com/langchain-ai/langchain/pull/13358
c9b9359647f1f1b24f106b93f180509db7932950
3b5e8bacfa6d5b7c223cb93f09c6c21d39542b43
"2023-11-14T17:20:22Z"
python
"2023-11-15T04:31:12Z"
libs/langchain/langchain/document_loaders/notiondb.py
        self,
        url: str,
        method: str = "GET",
        query_dict: Dict[str, Any] = {},
    ) -> Any:
        res = requests.request(
            method,
            url,
            headers=self.headers,
            json=query_dict,
            timeout=self.request_timeout_sec,
        )
        res.raise_for_status()
        return res.json()
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
9,696
No way to close an open connection in PGVector.py
### Feature request Initialization with Database Connection: when an instance of the PGVector class is created, it automatically establishes a connection with the PostgreSQL vector database. Method for Closing Connection: we need to implement a method within the PGVector class that allows us to close the established connection with the PostgreSQL database. `def __del__(self): # Close the session (and thus the connection) when the instance is destroyed. self.session.close()` ### Motivation The problem is that I am unable to close a connection, so the pool gets overloaded with multiple connections and the service starts throwing errors. ### Your contribution I guess, maybe.
https://github.com/langchain-ai/langchain/issues/9696
https://github.com/langchain-ai/langchain/pull/13232
85a77d2c2795b8f0463d809e459c68d4277bd080
1726d5dcdd495fa204c2907ce826df81527e0f14
"2023-08-24T11:57:09Z"
python
"2023-11-15T20:34:37Z"
libs/langchain/langchain/vectorstores/pgvector.py
from __future__ import annotations

import asyncio
import contextlib
import enum
import logging
import uuid
from functools import partial
from typing import (
    TYPE_CHECKING,
    Any,
    Callable,
    Dict,
    Generator,
    Iterable,
    List,
    Optional,
    Tuple,
    Type,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
9,696
No way to close an open connection in PGVector.py
### Feature request Initialization with Database Connection: when an instance of the PGVector class is created, it automatically establishes a connection with the PostgreSQL vector database. Method for Closing Connection: we need to implement a method within the PGVector class that allows us to close the established connection with the PostgreSQL database. `def __del__(self): # Close the session (and thus the connection) when the instance is destroyed. self.session.close()` ### Motivation The problem is that I am unable to close a connection, so the pool gets overloaded with multiple connections and the service starts throwing errors. ### Your contribution I guess, maybe.
https://github.com/langchain-ai/langchain/issues/9696
https://github.com/langchain-ai/langchain/pull/13232
85a77d2c2795b8f0463d809e459c68d4277bd080
1726d5dcdd495fa204c2907ce826df81527e0f14
"2023-08-24T11:57:09Z"
python
"2023-11-15T20:34:37Z"
libs/langchain/langchain/vectorstores/pgvector.py
)

import numpy as np
import sqlalchemy
from sqlalchemy import delete
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import Session

try:
    from sqlalchemy.orm import declarative_base
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

from langchain.docstore.document import Document
from langchain.schema.embeddings import Embeddings
from langchain.schema.vectorstore import VectorStore
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.utils import maximal_marginal_relevance

if TYPE_CHECKING:
    from langchain.vectorstores._pgvector_data_models import CollectionStore


class DistanceStrategy(str, enum.Enum):
    """Enumerator of the Distance strategies."""

    EUCLIDEAN = "l2"
    COSINE = "cosine"
    MAX_INNER_PRODUCT = "inner"


DEFAULT_DISTANCE_STRATEGY = DistanceStrategy.COSINE

Base = declarative_base()

_LANGCHAIN_DEFAULT_COLLECTION_NAME = "langchain"


class BaseModel(Base):
    """Base model for the SQL stores."""

    __abstract__ = True

    uuid = sqlalchemy.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)


def _results_to_docs(docs_and_scores: Any) -> List[Document]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
9,696
No way to close an open connection in PGVector.py
### Feature request Initialization with Database Connection: when an instance of the PGVector class is created, it automatically establishes a connection with the PostgreSQL vector database. Method for Closing Connection: we need to implement a method within the PGVector class that allows us to close the established connection with the PostgreSQL database. `def __del__(self): # Close the session (and thus the connection) when the instance is destroyed. self.session.close()` ### Motivation The problem is that I am unable to close a connection, so the pool gets overloaded with multiple connections and the service starts throwing errors. ### Your contribution I guess, maybe.
https://github.com/langchain-ai/langchain/issues/9696
https://github.com/langchain-ai/langchain/pull/13232
85a77d2c2795b8f0463d809e459c68d4277bd080
1726d5dcdd495fa204c2907ce826df81527e0f14
"2023-08-24T11:57:09Z"
python
"2023-11-15T20:34:37Z"
libs/langchain/langchain/vectorstores/pgvector.py
"""Return docs from docs and scores.""" return [doc for doc, _ in docs_and_scores] class PGVector(VectorStore): """`Postgres`/`PGVector` vector store. To use, you should have the ``pgvector`` python package installed. Args: connection_string: Postgres connection string. embedding_function: Any embedding function implementing `langchain.embeddings.base.Embeddings` interface. collection_name: The name of the collection to use. (default: langchain) NOTE: This is not the name of the table, but the name of the collection. The tables will be created when initializing the store (if not exists) So, make sure the user has the right permissions to create tables. distance_strategy: The distance strategy to use. (default: COSINE) pre_delete_collection: If True, will delete the collection if it exists. (default: False). Useful for testing. engine_args: SQLAlchemy's create engine arguments. Example: .. code-block:: python from langchain.vectorstores import PGVector from langchain.embeddings.openai import OpenAIEmbeddings CONNECTION_STRING = "postgresql+psycopg2://hwc@localhost:5432/test3" COLLECTION_NAME = "state_of_the_union_test" embeddings = OpenAIEmbeddings() vectorestore = PGVector.from_documents( embedding=embeddings, documents=docs, collection_name=COLLECTION_NAME,
                connection_string=CONNECTION_STRING,
            )
    """

    def __init__(
        self,
        connection_string: str,
        embedding_function: Embeddings,
        collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
        collection_metadata: Optional[dict] = None,
        distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,
        pre_delete_collection: bool = False,
        logger: Optional[logging.Logger] = None,
        relevance_score_fn: Optional[Callable[[float], float]] = None,
        *,
        connection: Optional[sqlalchemy.engine.Connection] = None,
        engine_args: Optional[dict[str, Any]] = None,
    ) -> None:
        self.connection_string = connection_string
        self.embedding_function = embedding_function
        self.collection_name = collection_name
        self.collection_metadata = collection_metadata
        self._distance_strategy = distance_strategy
        self.pre_delete_collection = pre_delete_collection
        self.logger = logger or logging.getLogger(__name__)
        self.override_relevance_score_fn = relevance_score_fn
        self.engine_args = engine_args or {}
        self._conn = connection if connection else self.connect()
        self.__post_init__()

    def __post_init__(
        self,
    ) -> None:
        """Initialize the store."""
        self.create_vector_extension()

        from langchain.vectorstores._pgvector_data_models import (
            CollectionStore,
            EmbeddingStore,
        )

        self.CollectionStore = CollectionStore
        self.EmbeddingStore = EmbeddingStore
        self.create_tables_if_not_exists()
        self.create_collection()

    @property
    def embeddings(self) -> Embeddings:
        return self.embedding_function

    def connect(self) -> sqlalchemy.engine.Connection:
        engine = sqlalchemy.create_engine(self.connection_string, **self.engine_args)
        conn = engine.connect()
        return conn

    def create_vector_extension(self) -> None:
        try:
            with Session(self._conn) as session:
                statement = sqlalchemy.text(
                    "BEGIN;"
                    "SELECT pg_advisory_xact_lock(1573678846307946496);"
                    "CREATE EXTENSION IF NOT EXISTS vector;"
                    "COMMIT;"
                )
                session.execute(statement)
                session.commit()
        except Exception as e:
            raise Exception(f"Failed to create vector extension: {e}") from e

    def create_tables_if_not_exists(self) -> None:
        with self._conn.begin():
            Base.metadata.create_all(self._conn)

    def drop_tables(self) -> None:
        with self._conn.begin():
            Base.metadata.drop_all(self._conn)

    def create_collection(self) -> None:
        if self.pre_delete_collection:
            self.delete_collection()
        with Session(self._conn) as session:
            self.CollectionStore.get_or_create(
                session, self.collection_name, cmetadata=self.collection_metadata
            )

    def delete_collection(self) -> None:
        self.logger.debug("Trying to delete collection")
        with Session(self._conn) as session:
            collection = self.get_collection(session)
            if not collection:
                self.logger.warning("Collection not found")
                return
            session.delete(collection)
            session.commit()

    @contextlib.contextmanager
    def _make_session(self) -> Generator[Session, None, None]:
"""Create a context manager for the session, bind to _conn string.""" yield Session(self._conn) def delete( self, ids: Optional[List[str]] = None, **kwargs: Any, ) -> None: """Delete vectors by ids or uuids. Args: ids: List of ids to delete. """ with Session(self._conn) as session: if ids is not None: self.logger.debug( "Trying to delete vectors by ids (represented by the model " "using the custom ids field)" ) stmt = delete(self.EmbeddingStore).where( self.EmbeddingStore.custom_id.in_(ids) ) session.execute(stmt) session.commit() def get_collection(self, session: Session) -> Optional["CollectionStore"]:
        return self.CollectionStore.get_by_name(session, self.collection_name)

    @classmethod
    def __from(
        cls,
        texts: List[str],
        embeddings: List[List[float]],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
        distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,
        connection_string: Optional[str] = None,
        pre_delete_collection: bool = False,
        **kwargs: Any,
    ) -> PGVector:
        if ids is None:
            ids = [str(uuid.uuid1()) for _ in texts]

        if not metadatas:
            metadatas = [{} for _ in texts]
        if connection_string is None:
            connection_string = cls.get_connection_string(kwargs)

        store = cls(
            connection_string=connection_string,
            collection_name=collection_name,
            embedding_function=embedding,
            distance_strategy=distance_strategy,
            pre_delete_collection=pre_delete_collection,
            **kwargs,
        )

        store.add_embeddings(
            texts=texts, embeddings=embeddings, metadatas=metadatas, ids=ids, **kwargs
        )

        return store

    def add_embeddings(
        self,
        texts: Iterable[str],
        embeddings: List[List[float]],
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> List[str]:
        """Add embeddings to the vectorstore.

        Args:
            texts: Iterable of strings to add to the vectorstore.
            embeddings: List of list of embedding vectors.
            metadatas: List of metadatas associated with the texts.
            kwargs: vectorstore specific parameters
        """
        if ids is None:
            ids = [str(uuid.uuid1()) for _ in texts]

        if not metadatas:
            metadatas = [{} for _ in texts]

        with Session(self._conn) as session:
            collection = self.get_collection(session)
            if not collection:
                raise ValueError("Collection not found")
            for text, metadata, embedding, id in zip(texts, metadatas, embeddings, ids):
                embedding_store = self.EmbeddingStore(
                    embedding=embedding,
                    document=text,
                    cmetadata=metadata,
                    custom_id=id,
                    collection_id=collection.uuid,
                )
                session.add(embedding_store)
            session.commit()

        return ids

    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        ids: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> List[str]:
        """Run more texts through the embeddings and add to the vectorstore.

        Args:
            texts: Iterable of strings to add to the vectorstore.
            metadatas: Optional list of metadatas associated with the texts.
            kwargs: vectorstore specific parameters

        Returns:
            List of ids from adding the texts into the vectorstore.
        """
        embeddings = self.embedding_function.embed_documents(list(texts))
        return self.add_embeddings(
            texts=texts, embeddings=embeddings, metadatas=metadatas, ids=ids, **kwargs
        )

    def similarity_search(
        self,
        query: str,
        k: int = 4,
        filter: Optional[dict] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Run similarity search with PGVector with distance.

        Args:
            query (str): Query text to search for.
            k (int): Number of results to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List of Documents most similar to the query.
        """
        embedding = self.embedding_function.embed_query(text=query)
        return self.similarity_search_by_vector(
            embedding=embedding,
            k=k,
            filter=filter,
        )

    def similarity_search_with_score(
        self,
        query: str,
        k: int = 4,
        filter: Optional[dict] = None,
    ) -> List[Tuple[Document, float]]:
        """Return docs most similar to query.

        Args:
            query: Text to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List of Documents most similar to the query and score for each.
        """
        embedding = self.embedding_function.embed_query(query)
        docs = self.similarity_search_with_score_by_vector(
            embedding=embedding, k=k, filter=filter
        )
        return docs

    @property
    def distance_strategy(self) -> Any:
        if self._distance_strategy == DistanceStrategy.EUCLIDEAN:
            return self.EmbeddingStore.embedding.l2_distance
        elif self._distance_strategy == DistanceStrategy.COSINE:
            return self.EmbeddingStore.embedding.cosine_distance
        elif self._distance_strategy == DistanceStrategy.MAX_INNER_PRODUCT:
            return self.EmbeddingStore.embedding.max_inner_product
        else:
            raise ValueError(
                f"Got unexpected value for distance: {self._distance_strategy}. "
                f"Should be one of {', '.join([ds.value for ds in DistanceStrategy])}."
            )

    def similarity_search_with_score_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        filter: Optional[dict] = None,
    ) -> List[Tuple[Document, float]]:
        results = self.__query_collection(embedding=embedding, k=k, filter=filter)

        return self._results_to_docs_and_scores(results)

    def _results_to_docs_and_scores(self, results: Any) -> List[Tuple[Document, float]]:
"""Return docs and scores from results.""" docs = [ ( Document( page_content=result.EmbeddingStore.document, metadata=result.EmbeddingStore.cmetadata, ), result.distance if self.embedding_function is not None else None, ) for result in results ] return docs def __query_collection( self, embedding: List[float], k: int = 4, filter: Optional[Dict[str, str]] = None, ) -> List[Any]: """Query the collection.""" with Session(self._conn) as session: collection = self.get_collection(session) if not collection: raise ValueError("Collection not found") filter_by = self.EmbeddingStore.collection_id == collection.uuid if filter is not None: filter_clauses = [] for key, value in filter.items(): IN = "in" if isinstance(value, dict) and IN in map(str.lower, value):
                        value_case_insensitive = {
                            k.lower(): v for k, v in value.items()
                        }
                        filter_by_metadata = self.EmbeddingStore.cmetadata[
                            key
                        ].astext.in_(value_case_insensitive[IN])
                        filter_clauses.append(filter_by_metadata)
                    else:
                        filter_by_metadata = self.EmbeddingStore.cmetadata[
                            key
                        ].astext == str(value)
                        filter_clauses.append(filter_by_metadata)

                filter_by = sqlalchemy.and_(filter_by, *filter_clauses)

            _type = self.EmbeddingStore

            results: List[Any] = (
                session.query(
                    self.EmbeddingStore,
                    self.distance_strategy(embedding).label("distance"),
                )
                .filter(filter_by)
                .order_by(sqlalchemy.asc("distance"))
                .join(
                    self.CollectionStore,
                    self.EmbeddingStore.collection_id == self.CollectionStore.uuid,
                )
                .limit(k)
                .all()
            )

        return results

    def similarity_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        filter: Optional[dict] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs most similar to embedding vector.

        Args:
            embedding: Embedding to look up documents similar to.
            k: Number of Documents to return. Defaults to 4.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List of Documents most similar to the query vector.
        """
        docs_and_scores = self.similarity_search_with_score_by_vector(
            embedding=embedding, k=k, filter=filter
        )
        return _results_to_docs(docs_and_scores)

    @classmethod
    def from_texts(
        cls: Type[PGVector],
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
        distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,
        ids: Optional[List[str]] = None,
        pre_delete_collection: bool = False,
        **kwargs: Any,
    ) -> PGVector:
        """
        Return VectorStore initialized from texts and embeddings.
        Postgres connection string is required: either pass it as a parameter
        or set the PGVECTOR_CONNECTION_STRING environment variable.
        """
        embeddings = embedding.embed_documents(list(texts))

        return cls.__from(
            texts,
            embeddings,
            embedding,
            metadatas=metadatas,
            ids=ids,
            collection_name=collection_name,
            distance_strategy=distance_strategy,
            pre_delete_collection=pre_delete_collection,
            **kwargs,
        )

    @classmethod
    def from_embeddings(
        cls,
        text_embeddings: List[Tuple[str, List[float]]],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
        distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,
        ids: Optional[List[str]] = None,
        pre_delete_collection: bool = False,
        **kwargs: Any,
    ) -> PGVector:
"""Construct PGVector wrapper from raw documents and pre- generated embeddings. Return VectorStore initialized from documents and embeddings. Postgres connection string is required "Either pass it as a parameter or set the PGVECTOR_CONNECTION_STRING environment variable. Example: .. code-block:: python from langchain.vectorstores import PGVector from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() text_embeddings = embeddings.embed_documents(texts) text_embedding_pairs = list(zip(texts, text_embeddings)) faiss = PGVector.from_embeddings(text_embedding_pairs, embeddings) """ texts = [t[0] for t in text_embeddings] embeddings = [t[1] for t in text_embeddings] return cls.__from( texts, embeddings, embedding, metadatas=metadatas, ids=ids, collection_name=collection_name, distance_strategy=distance_strategy, pre_delete_collection=pre_delete_collection, **kwargs, ) @classmethod def from_existing_index(
        cls: Type[PGVector],
        embedding: Embeddings,
        collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
        distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,
        pre_delete_collection: bool = False,
        **kwargs: Any,
    ) -> PGVector:
        """
        Get instance of an existing PGVector store. This method will
        return the instance of the store without inserting any new
        embeddings.
        """
        connection_string = cls.get_connection_string(kwargs)

        store = cls(
            connection_string=connection_string,
            collection_name=collection_name,
            embedding_function=embedding,
            distance_strategy=distance_strategy,
            pre_delete_collection=pre_delete_collection,
        )

        return store

    @classmethod
    def get_connection_string(cls, kwargs: Dict[str, Any]) -> str:
        connection_string: str = get_from_dict_or_env(
            data=kwargs,
            key="connection_string",
            env_key="PGVECTOR_CONNECTION_STRING",
        )

        if not connection_string:
            raise ValueError(
                "Postgres connection string is required. "
                "Either pass it as a parameter "
                "or set the PGVECTOR_CONNECTION_STRING environment variable."
            )

        return connection_string

    @classmethod
    def from_documents(
        cls: Type[PGVector],
        documents: List[Document],
        embedding: Embeddings,
        collection_name: str = _LANGCHAIN_DEFAULT_COLLECTION_NAME,
        distance_strategy: DistanceStrategy = DEFAULT_DISTANCE_STRATEGY,
        ids: Optional[List[str]] = None,
        pre_delete_collection: bool = False,
        **kwargs: Any,
    ) -> PGVector:
        """
        Return VectorStore initialized from documents and embeddings.
        Postgres connection string is required: either pass it as a parameter
        or set the PGVECTOR_CONNECTION_STRING environment variable.
        """
        texts = [d.page_content for d in documents]
        metadatas = [d.metadata for d in documents]
        connection_string = cls.get_connection_string(kwargs)

        kwargs["connection_string"] = connection_string

        return cls.from_texts(
            texts=texts,
            pre_delete_collection=pre_delete_collection,
            embedding=embedding,
            distance_strategy=distance_strategy,
            metadatas=metadatas,
            ids=ids,
            collection_name=collection_name,
            **kwargs,
        )

    @classmethod
    def connection_string_from_db_params(
        cls,
        driver: str,
        host: str,
        port: int,
        database: str,
        user: str,
        password: str,
    ) -> str:
        """Return connection string from database parameters."""
        return f"postgresql+{driver}://{user}:{password}@{host}:{port}/{database}"

    def _select_relevance_score_fn(self) -> Callable[[float], float]:
        """
        The 'correct' relevance function
        may differ depending on a few things, including:
        - the distance / similarity metric used by the VectorStore
        - the scale of your embeddings (OpenAI's are unit normed. Many others are not!)
        - embedding dimensionality
        - etc.
        """
        if self.override_relevance_score_fn is not None:
            return self.override_relevance_score_fn
        if self._distance_strategy == DistanceStrategy.COSINE:
            return self._cosine_relevance_score_fn
        elif self._distance_strategy == DistanceStrategy.EUCLIDEAN:
            return self._euclidean_relevance_score_fn
        elif self._distance_strategy == DistanceStrategy.MAX_INNER_PRODUCT:
            return self._max_inner_product_relevance_score_fn
        else:
            raise ValueError(
                "No supported normalization function"
                f" for distance_strategy of {self._distance_strategy}."
                " Consider providing relevance_score_fn to PGVector constructor."
            )

    def max_marginal_relevance_search_with_score_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[Dict[str, str]] = None,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Return docs selected using the maximal marginal relevance with score
            to embedding vector.

        Maximal marginal relevance optimizes for similarity to query AND diversity
            among selected documents.

        Args:
            embedding: Embedding to look up documents similar to.
            k (int): Number of Documents to return. Defaults to 4.
            fetch_k (int): Number of Documents to fetch to pass to MMR algorithm.
                Defaults to 20.
            lambda_mult (float): Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List[Tuple[Document, float]]: List of Documents selected by maximal
                marginal relevance to the query and score for each.
        """
        results = self.__query_collection(embedding=embedding, k=fetch_k, filter=filter)

        embedding_list = [result.EmbeddingStore.embedding for result in results]

        mmr_selected = maximal_marginal_relevance(
            np.array(embedding, dtype=np.float32),
            embedding_list,
            k=k,
            lambda_mult=lambda_mult,
        )

        candidates = self._results_to_docs_and_scores(results)

        return [r for i, r in enumerate(candidates) if i in mmr_selected]

    def max_marginal_relevance_search(
        self,
        query: str,
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[Dict[str, str]] = None,
        **kwargs: Any,
    ) -> List[Document]:
"""Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Args: query (str): Text to look up documents similar to. k (int): Number of Documents to return. Defaults to 4. fetch_k (int): Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. lambda_mult (float): Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None. Returns: List[Document]: List of Documents selected by maximal marginal relevance. """ embedding = self.embedding_function.embed_query(query) return self.max_marginal_relevance_search_by_vector( embedding, k=k, fetch_k=fetch_k, lambda_mult=lambda_mult, filter=filter, **kwargs, ) def max_marginal_relevance_search_with_score( self, query: str, k: int = 4, fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[dict] = None,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Return docs selected using the maximal marginal relevance with score.

        Maximal marginal relevance optimizes for similarity to query AND diversity
            among selected documents.

        Args:
            query (str): Text to look up documents similar to.
            k (int): Number of Documents to return. Defaults to 4.
            fetch_k (int): Number of Documents to fetch to pass to MMR algorithm.
                Defaults to 20.
            lambda_mult (float): Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List[Tuple[Document, float]]: List of Documents selected by maximal
                marginal relevance to the query and score for each.
        """
        embedding = self.embedding_function.embed_query(query)
        docs = self.max_marginal_relevance_search_with_score_by_vector(
            embedding=embedding,
            k=k,
            fetch_k=fetch_k,
            lambda_mult=lambda_mult,
            filter=filter,
            **kwargs,
        )
        return docs

    def max_marginal_relevance_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[Dict[str, str]] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance
            to embedding vector.

        Maximal marginal relevance optimizes for similarity to query AND diversity
            among selected documents.

        Args:
            embedding: Embedding to look up documents similar to.
            k (int): Number of Documents to return. Defaults to 4.
            fetch_k (int): Number of Documents to fetch to pass to MMR algorithm.
                Defaults to 20.
            lambda_mult (float): Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding
                to maximum diversity and 1 to minimum diversity.
                Defaults to 0.5.
            filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.

        Returns:
            List[Document]: List of Documents selected by maximal marginal relevance.
        """
        docs_and_scores = self.max_marginal_relevance_search_with_score_by_vector(
            embedding,
            k=k,
            fetch_k=fetch_k,
            lambda_mult=lambda_mult,
            filter=filter,
            **kwargs,
        )
        return _results_to_docs(docs_and_scores)

    async def amax_marginal_relevance_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[Dict[str, str]] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance."""
        func = partial(
            self.max_marginal_relevance_search_by_vector,
            embedding,
            k=k,
            fetch_k=fetch_k,
            lambda_mult=lambda_mult,
            filter=filter,
            **kwargs,
        )
        return await asyncio.get_event_loop().run_in_executor(None, func)
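The class above opens a SQLAlchemy connection in `connect()` and keeps it on `self._conn`, but, as issue 9696 reports, exposes no method to close it. Below is a minimal usage sketch of how a caller could release the connection explicitly before the fix landed, using only the attributes visible above; the connection string and embedding model are placeholders, and reaching into the private `_conn` attribute is a stopgap rather than an official API:

```python
# Hedged sketch, not part of pgvector.py: explicitly releasing the
# connection that PGVector.connect() opened, so long-lived instances
# do not exhaust the Postgres connection pool.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.pgvector import PGVector

store = PGVector(
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",  # placeholder DSN
    embedding_function=OpenAIEmbeddings(),
)
try:
    docs = store.similarity_search("What did the president say?", k=4)
finally:
    # `_conn` is the sqlalchemy.engine.Connection created in connect();
    # closing it returns the connection to the engine's pool.
    store._conn.close()
```

The actual fix for the issue landed in PR 13232; the sketch only illustrates the resource-management gap the report describes, not the merged change.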
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,507
VLLMOpenAI -- create() got an unexpected keyword argument 'api_key'
### System Info

Python 3.9
langchain 0.0.336
openai 1.3.2
pandas 2.1.3

### Who can help?

@EYU

### Information

- [X] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

First of all, thank you for this great library! Concerning the bug, I have a vllm openai server (0.2.1.post1) running locally, started with the following command:

```
python -m vllm.entrypoints.openai.api_server --model ./zephyr-7b-beta --served-model-name zephyr-7b-beta
```

On the client side, I have this piece of code, slightly adapted from the documentation (only the model name changes):

```python
from langchain.llms import VLLMOpenAI

llm = VLLMOpenAI(
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8000/v1",
    model_name="zephyr-7b-beta",
)
print(llm("Rome is"))
```

And I got the following error:

```text
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[19], line 6
      1 llm = VLLMOpenAI(
      2     openai_api_key="EMPTY",
      3     openai_api_base="http://localhost:8000/v1",
      4     model_name="zephyr-7b-beta",
      5 )
----> 6 llm("Rome is")

File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:876, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs)
    869 if not isinstance(prompt, str):
    870     raise ValueError(
    871         "Argument `prompt` is expected to be a string. Instead found "
    872         f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
    873         "`generate` instead."
    874     )
    875 return (
--> 876     self.generate(
    877         [prompt],
    878         stop=stop,
    879         callbacks=callbacks,
    880         tags=tags,
    881         metadata=metadata,
    882         **kwargs,
    883     )
    884     .generations[0][0]
    885     .text
    886 )

File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:656, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
    641     raise ValueError(
    642         "Asked to cache, but no cache found at `langchain.cache`."
    643     )
    644 run_managers = [
    645     callback_manager.on_llm_start(
    646         dumpd(self),
   (...)
    654     )
    655 ]
--> 656 output = self._generate_helper(
    657     prompts, stop, run_managers, bool(new_arg_supported), **kwargs
    658 )
    659 return output
    660 if len(missing_prompts) > 0:

File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:544, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    542     for run_manager in run_managers:
    543         run_manager.on_llm_error(e)
--> 544     raise e
    545 flattened_outputs = output.flatten()
    546 for manager, flattened_output in zip(run_managers, flattened_outputs):

File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:531, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    521 def _generate_helper(
    522     self,
    523     prompts: List[str],
   (...)
    527     **kwargs: Any,
    528 ) -> LLMResult:
    529     try:
    530         output = (
--> 531             self._generate(
    532                 prompts,
    533                 stop=stop,
    534                 # TODO: support multiple run managers
    535                 run_manager=run_managers[0] if run_managers else None,
    536                 **kwargs,
    537             )
    538             if new_arg_supported
    539             else self._generate(prompts, stop=stop)
    540         )
    541     except BaseException as e:
    542         for run_manager in run_managers:

File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:454, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs)
    442     choices.append(
    443         {
    444             "text": generation.text,
   (...)
    451         }
    452     )
    453 else:
--> 454     response = completion_with_retry(
    455         self, prompt=_prompts, run_manager=run_manager, **params
    456     )
    457     if not isinstance(response, dict):
    458         # V1 client returns the response in an PyDantic object instead of
    459         # dict. For the transition period, we deep convert it to dict.
    460         response = response.dict()

File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:114, in completion_with_retry(llm, run_manager, **kwargs)
    112 """Use tenacity to retry the completion call."""
    113 if is_openai_v1():
--> 114     return llm.client.create(**kwargs)
    116 retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)
    118 @retry_decorator
    119 def _completion_with_retry(**kwargs: Any) -> Any:

File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/openai/_utils/_utils.py:299, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
    297     msg = f"Missing required argument: {quote(missing[0])}"
    298     raise TypeError(msg)
--> 299 return func(*args, **kwargs)

TypeError: create() got an unexpected keyword argument 'api_key'
```

It seems that if I remove line 158 from `langchain/llms/vllm.py`, the code works.

### Expected behavior

I expect a completion with no error.
https://github.com/langchain-ai/langchain/issues/13507
https://github.com/langchain-ai/langchain/pull/13517
6bc08266e0c9ca7841bb322259e69a9c0dd6a08d
69d39e2173fcb44cdcd334cb912acaf7b148dff6
"2023-11-17T08:56:07Z"
python
"2023-11-20T01:49:55Z"
libs/langchain/langchain/llms/vllm.py
from typing import Any, Dict, List, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import BaseLLM
from langchain.llms.openai import BaseOpenAI
from langchain.pydantic_v1 import Field, root_validator
from langchain.schema.output import Generation, LLMResult


class VLLM(BaseLLM):
    """VLLM language model."""

    model: str = ""
    """The name or path of a HuggingFace Transformers model."""

    tensor_parallel_size: Optional[int] = 1
    """The number of GPUs to use for distributed execution with tensor parallelism."""

    trust_remote_code: Optional[bool] = False
    """Trust remote code (e.g., from HuggingFace) when downloading the model
    and tokenizer."""

    n: int = 1
    """Number of output sequences to return for the given prompt."""

    best_of: Optional[int] = None
    """Number of output sequences that are generated from the prompt."""

    presence_penalty: float = 0.0
    """Float that penalizes new tokens based on whether they appear in the
    generated text so far"""
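The traceback in the issue shows `api_key` flowing from the invocation parameters (built further down this same file, around the line 158 the reporter mentions) into `llm.client.create(**kwargs)`, which the openai>=1.0 client rejects. A hedged sketch of what a version guard could look like follows; the `is_openai_v1` helper is the one the traceback shows `langchain.llms.openai` already calling (its import path here is an assumption), the class name is hypothetical, and this is illustrative rather than the merged patch from PR 13517:

```python
# Hedged sketch: only inject credentials into the request payload for the
# legacy (<1.0) openai client; the v1 client carries api_key/api_base on the
# client object itself and raises TypeError if they are passed to create().
from typing import Any, Dict

from langchain.llms.openai import BaseOpenAI
from langchain.utils.openai import is_openai_v1  # import path assumed


class PatchedVLLMOpenAI(BaseOpenAI):  # hypothetical name, for illustration only
    """vLLM's OpenAI-compatible endpoint with the api_key guard applied."""

    @property
    def _invocation_params(self) -> Dict[str, Any]:
        params: Dict[str, Any] = {
            "model": self.model_name,
            **self._default_params,
        }
        if not is_openai_v1():
            # Pre-1.0 clients expect credentials in each request's kwargs.
            params["api_key"] = self.openai_api_key
            params["api_base"] = self.openai_api_base
        return params
```

Dropping `api_key` from the v1 code path matches the reporter's observation that deleting the offending line makes the completion succeed.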
https://github.com/langchain-ai/langchain/issues/13507
https://github.com/langchain-ai/langchain/pull/13517
6bc08266e0c9ca7841bb322259e69a9c0dd6a08d
69d39e2173fcb44cdcd334cb912acaf7b148dff6
"2023-11-17T08:56:07Z"
python
"2023-11-20T01:49:55Z"
libs/langchain/langchain/llms/vllm.py
    frequency_penalty: float = 0.0
    """Float that penalizes new tokens based on their frequency in the
    generated text so far"""

    temperature: float = 1.0
    """Float that controls the randomness of the sampling."""

    top_p: float = 1.0
    """Float that controls the cumulative probability of the top tokens to consider."""

    top_k: int = -1
    """Integer that controls the number of top tokens to consider."""

    use_beam_search: bool = False
    """Whether to use beam search instead of sampling."""

    stop: Optional[List[str]] = None
    """List of strings that stop the generation when they are generated."""

    ignore_eos: bool = False
    """Whether to ignore the EOS token and continue generating tokens after
    the EOS token is generated."""

    max_new_tokens: int = 512
    """Maximum number of tokens to generate per output sequence."""

    logprobs: Optional[int] = None
    """Number of log probabilities to return per output token."""

    dtype: str = "auto"
    """The data type for the model weights and activations."""

    download_dir: Optional[str] = None
    """Directory to download and load the weights. (Default to the default
    cache dir of huggingface)"""

    vllm_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Holds any model parameters valid for `vllm.LLM` call not explicitly
    specified."""

    client: Any

    @root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,507
VLLMOpenAI -- create() got an unexpected keyword argument 'api_key'
### System Info Python 3.9 langchain 0.0.336 openai 1.3.2 pandas 2.1.3 ### Who can help? @EYU ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction First of all, thank you for this great library ! Concerning the bug, I have a vllm openai server (0.2.1.post1) running locally started with the following command: ``` python -m vllm.entrypoints.openai.api_server --model ./zephyr-7b-beta --served-model-name zephyr-7b-beta ``` On the client side, I have this piece of code, slightly adapted from the documentation (only the model name changes). ```python from langchain.llms import VLLMOpenAI llm = VLLMOpenAI( openai_api_key="EMPTY", openai_api_base="http://localhost:8000/v1", model_name="zephyr-7b-beta", ) print(llm("Rome is")) ``` And I got the following error: ```text --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[19], line 6 1 llm = VLLMOpenAI( 2 openai_api_key="EMPTY", 3 openai_api_base="http://localhost:8000/v1", 4 model_name="zephyr-7b-beta", 5 ) ----> 6 llm("Rome is") File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:876, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs) 869 if not isinstance(prompt, str): 870 raise ValueError( 871 "Argument `prompt` is expected to be a string. Instead found " 872 f"{type(prompt)}. If you want to run the LLM on multiple prompts, use " 873 "`generate` instead." 874 ) 875 return ( --> 876 self.generate( 877 [prompt], 878 stop=stop, 879 callbacks=callbacks, 880 tags=tags, 881 metadata=metadata, 882 **kwargs, 883 ) 884 .generations[0][0] 885 .text 886 ) File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:656, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs) 641 raise ValueError( 642 "Asked to cache, but no cache found at `langchain.cache`." 643 ) 644 run_managers = [ 645 callback_manager.on_llm_start( 646 dumpd(self), (...) 654 ) 655 ] --> 656 output = self._generate_helper( 657 prompts, stop, run_managers, bool(new_arg_supported), **kwargs 658 ) 659 return output 660 if len(missing_prompts) > 0: File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:544, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 542 for run_manager in run_managers: 543 run_manager.on_llm_error(e) --> 544 raise e 545 flattened_outputs = output.flatten() 546 for manager, flattened_output in zip(run_managers, flattened_outputs): File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:531, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 521 def _generate_helper( 522 self, 523 prompts: List[str], (...) 
527 **kwargs: Any, 528 ) -> LLMResult: 529 try: 530 output = ( --> 531 self._generate( 532 prompts, 533 stop=stop, 534 # TODO: support multiple run managers 535 run_manager=run_managers[0] if run_managers else None, 536 **kwargs, 537 ) 538 if new_arg_supported 539 else self._generate(prompts, stop=stop) 540 ) 541 except BaseException as e: 542 for run_manager in run_managers: File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:454, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs) 442 choices.append( 443 { 444 "text": generation.text, (...) 451 } 452 ) 453 else: --> 454 response = completion_with_retry( 455 self, prompt=_prompts, run_manager=run_manager, **params 456 ) 457 if not isinstance(response, dict): 458 # V1 client returns the response in an PyDantic object instead of 459 # dict. For the transition period, we deep convert it to dict. 460 response = response.dict() File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:114, in completion_with_retry(llm, run_manager, **kwargs) 112 """Use tenacity to retry the completion call.""" 113 if is_openai_v1(): --> 114 return llm.client.create(**kwargs) 116 retry_decorator = _create_retry_decorator(llm, run_manager=run_manager) 118 @retry_decorator 119 def _completion_with_retry(**kwargs: Any) -> Any: File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/openai/_utils/_utils.py:299, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs) 297 msg = f"Missing required argument: {quote(missing[0])}" 298 raise TypeError(msg) --> 299 return func(*args, **kwargs) TypeError: create() got an unexpected keyword argument 'api_key' ``` It seems that if I remove the line 158 from `langchain/llms/vllm.py`, the code is working. ### Expected behavior I expect a completion with no error.
https://github.com/langchain-ai/langchain/issues/13507
https://github.com/langchain-ai/langchain/pull/13517
6bc08266e0c9ca7841bb322259e69a9c0dd6a08d
69d39e2173fcb44cdcd334cb912acaf7b148dff6
"2023-11-17T08:56:07Z"
python
"2023-11-20T01:49:55Z"
libs/langchain/langchain/llms/vllm.py
"""Validate that python package exists in environment.""" try: from vllm import LLM as VLLModel except ImportError: raise ImportError( "Could not import vllm python package. " "Please install it with `pip install vllm`." ) values["client"] = VLLModel( model=values["model"], tensor_parallel_size=values["tensor_parallel_size"], trust_remote_code=values["trust_remote_code"], dtype=values["dtype"], download_dir=values["download_dir"], **values["vllm_kwargs"], ) return values @property def _default_params(self) -> Dict[str, Any]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,507
VLLMOpenAI -- create() got an unexpected keyword argument 'api_key'
### System Info Python 3.9 langchain 0.0.336 openai 1.3.2 pandas 2.1.3 ### Who can help? @EYU ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction First of all, thank you for this great library ! Concerning the bug, I have a vllm openai server (0.2.1.post1) running locally started with the following command: ``` python -m vllm.entrypoints.openai.api_server --model ./zephyr-7b-beta --served-model-name zephyr-7b-beta ``` On the client side, I have this piece of code, slightly adapted from the documentation (only the model name changes). ```python from langchain.llms import VLLMOpenAI llm = VLLMOpenAI( openai_api_key="EMPTY", openai_api_base="http://localhost:8000/v1", model_name="zephyr-7b-beta", ) print(llm("Rome is")) ``` And I got the following error: ```text --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[19], line 6 1 llm = VLLMOpenAI( 2 openai_api_key="EMPTY", 3 openai_api_base="http://localhost:8000/v1", 4 model_name="zephyr-7b-beta", 5 ) ----> 6 llm("Rome is") File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:876, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs) 869 if not isinstance(prompt, str): 870 raise ValueError( 871 "Argument `prompt` is expected to be a string. Instead found " 872 f"{type(prompt)}. If you want to run the LLM on multiple prompts, use " 873 "`generate` instead." 874 ) 875 return ( --> 876 self.generate( 877 [prompt], 878 stop=stop, 879 callbacks=callbacks, 880 tags=tags, 881 metadata=metadata, 882 **kwargs, 883 ) 884 .generations[0][0] 885 .text 886 ) File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:656, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs) 641 raise ValueError( 642 "Asked to cache, but no cache found at `langchain.cache`." 643 ) 644 run_managers = [ 645 callback_manager.on_llm_start( 646 dumpd(self), (...) 654 ) 655 ] --> 656 output = self._generate_helper( 657 prompts, stop, run_managers, bool(new_arg_supported), **kwargs 658 ) 659 return output 660 if len(missing_prompts) > 0: File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:544, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 542 for run_manager in run_managers: 543 run_manager.on_llm_error(e) --> 544 raise e 545 flattened_outputs = output.flatten() 546 for manager, flattened_output in zip(run_managers, flattened_outputs): File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:531, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 521 def _generate_helper( 522 self, 523 prompts: List[str], (...) 
527 **kwargs: Any, 528 ) -> LLMResult: 529 try: 530 output = ( --> 531 self._generate( 532 prompts, 533 stop=stop, 534 # TODO: support multiple run managers 535 run_manager=run_managers[0] if run_managers else None, 536 **kwargs, 537 ) 538 if new_arg_supported 539 else self._generate(prompts, stop=stop) 540 ) 541 except BaseException as e: 542 for run_manager in run_managers: File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:454, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs) 442 choices.append( 443 { 444 "text": generation.text, (...) 451 } 452 ) 453 else: --> 454 response = completion_with_retry( 455 self, prompt=_prompts, run_manager=run_manager, **params 456 ) 457 if not isinstance(response, dict): 458 # V1 client returns the response in an PyDantic object instead of 459 # dict. For the transition period, we deep convert it to dict. 460 response = response.dict() File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:114, in completion_with_retry(llm, run_manager, **kwargs) 112 """Use tenacity to retry the completion call.""" 113 if is_openai_v1(): --> 114 return llm.client.create(**kwargs) 116 retry_decorator = _create_retry_decorator(llm, run_manager=run_manager) 118 @retry_decorator 119 def _completion_with_retry(**kwargs: Any) -> Any: File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/openai/_utils/_utils.py:299, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs) 297 msg = f"Missing required argument: {quote(missing[0])}" 298 raise TypeError(msg) --> 299 return func(*args, **kwargs) TypeError: create() got an unexpected keyword argument 'api_key' ``` It seems that if I remove the line 158 from `langchain/llms/vllm.py`, the code is working. ### Expected behavior I expect a completion with no error.
https://github.com/langchain-ai/langchain/issues/13507
https://github.com/langchain-ai/langchain/pull/13517
6bc08266e0c9ca7841bb322259e69a9c0dd6a08d
69d39e2173fcb44cdcd334cb912acaf7b148dff6
"2023-11-17T08:56:07Z"
python
"2023-11-20T01:49:55Z"
libs/langchain/langchain/llms/vllm.py
"""Get the default parameters for calling vllm.""" return { "n": self.n, "best_of": self.best_of, "max_tokens": self.max_new_tokens, "top_k": self.top_k, "top_p": self.top_p, "temperature": self.temperature, "presence_penalty": self.presence_penalty, "frequency_penalty": self.frequency_penalty, "stop": self.stop, "ignore_eos": self.ignore_eos, "use_beam_search": self.use_beam_search, "logprobs": self.logprobs, } def _generate(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,507
VLLMOpenAI -- create() got an unexpected keyword argument 'api_key'
### System Info Python 3.9 langchain 0.0.336 openai 1.3.2 pandas 2.1.3 ### Who can help? @EYU ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction First of all, thank you for this great library ! Concerning the bug, I have a vllm openai server (0.2.1.post1) running locally started with the following command: ``` python -m vllm.entrypoints.openai.api_server --model ./zephyr-7b-beta --served-model-name zephyr-7b-beta ``` On the client side, I have this piece of code, slightly adapted from the documentation (only the model name changes). ```python from langchain.llms import VLLMOpenAI llm = VLLMOpenAI( openai_api_key="EMPTY", openai_api_base="http://localhost:8000/v1", model_name="zephyr-7b-beta", ) print(llm("Rome is")) ``` And I got the following error: ```text --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[19], line 6 1 llm = VLLMOpenAI( 2 openai_api_key="EMPTY", 3 openai_api_base="http://localhost:8000/v1", 4 model_name="zephyr-7b-beta", 5 ) ----> 6 llm("Rome is") File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:876, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs) 869 if not isinstance(prompt, str): 870 raise ValueError( 871 "Argument `prompt` is expected to be a string. Instead found " 872 f"{type(prompt)}. If you want to run the LLM on multiple prompts, use " 873 "`generate` instead." 874 ) 875 return ( --> 876 self.generate( 877 [prompt], 878 stop=stop, 879 callbacks=callbacks, 880 tags=tags, 881 metadata=metadata, 882 **kwargs, 883 ) 884 .generations[0][0] 885 .text 886 ) File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:656, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs) 641 raise ValueError( 642 "Asked to cache, but no cache found at `langchain.cache`." 643 ) 644 run_managers = [ 645 callback_manager.on_llm_start( 646 dumpd(self), (...) 654 ) 655 ] --> 656 output = self._generate_helper( 657 prompts, stop, run_managers, bool(new_arg_supported), **kwargs 658 ) 659 return output 660 if len(missing_prompts) > 0: File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:544, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 542 for run_manager in run_managers: 543 run_manager.on_llm_error(e) --> 544 raise e 545 flattened_outputs = output.flatten() 546 for manager, flattened_output in zip(run_managers, flattened_outputs): File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:531, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 521 def _generate_helper( 522 self, 523 prompts: List[str], (...) 
527 **kwargs: Any, 528 ) -> LLMResult: 529 try: 530 output = ( --> 531 self._generate( 532 prompts, 533 stop=stop, 534 # TODO: support multiple run managers 535 run_manager=run_managers[0] if run_managers else None, 536 **kwargs, 537 ) 538 if new_arg_supported 539 else self._generate(prompts, stop=stop) 540 ) 541 except BaseException as e: 542 for run_manager in run_managers: File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:454, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs) 442 choices.append( 443 { 444 "text": generation.text, (...) 451 } 452 ) 453 else: --> 454 response = completion_with_retry( 455 self, prompt=_prompts, run_manager=run_manager, **params 456 ) 457 if not isinstance(response, dict): 458 # V1 client returns the response in an PyDantic object instead of 459 # dict. For the transition period, we deep convert it to dict. 460 response = response.dict() File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:114, in completion_with_retry(llm, run_manager, **kwargs) 112 """Use tenacity to retry the completion call.""" 113 if is_openai_v1(): --> 114 return llm.client.create(**kwargs) 116 retry_decorator = _create_retry_decorator(llm, run_manager=run_manager) 118 @retry_decorator 119 def _completion_with_retry(**kwargs: Any) -> Any: File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/openai/_utils/_utils.py:299, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs) 297 msg = f"Missing required argument: {quote(missing[0])}" 298 raise TypeError(msg) --> 299 return func(*args, **kwargs) TypeError: create() got an unexpected keyword argument 'api_key' ``` It seems that if I remove the line 158 from `langchain/llms/vllm.py`, the code is working. ### Expected behavior I expect a completion with no error.
https://github.com/langchain-ai/langchain/issues/13507
https://github.com/langchain-ai/langchain/pull/13517
6bc08266e0c9ca7841bb322259e69a9c0dd6a08d
69d39e2173fcb44cdcd334cb912acaf7b148dff6
"2023-11-17T08:56:07Z"
python
"2023-11-20T01:49:55Z"
libs/langchain/langchain/llms/vllm.py
        self,
        prompts: List[str],
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> LLMResult:
        """Run the LLM on the given prompt and input."""
        from vllm import SamplingParams

        params = {**self._default_params, **kwargs, "stop": stop}
        sampling_params = SamplingParams(**params)
        outputs = self.client.generate(prompts, sampling_params)

        generations = []
        for output in outputs:
            text = output.outputs[0].text
            generations.append([Generation(text=text)])

        return LLMResult(generations=generations)

    @property
    def _llm_type(self) -> str:
        """Return type of llm."""
        return "vllm"


class VLLMOpenAI(BaseOpenAI):
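Note that only the first candidate sequence per prompt is kept (`output.outputs[0]`), so each inner list holds a single `Generation`. A hedged sketch of consuming the resulting `LLMResult`:

```python
# Assumes `llm` is the VLLM instance from the earlier sketch.
result = llm.generate(["Rome is", "Paris is"])
for per_prompt in result.generations:
    print(per_prompt[0].text)  # exactly one Generation per prompt here
```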
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,507
VLLMOpenAI -- create() got an unexpected keyword argument 'api_key'
### System Info Python 3.9 langchain 0.0.336 openai 1.3.2 pandas 2.1.3 ### Who can help? @EYU ### Information - [X] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [X] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction First of all, thank you for this great library ! Concerning the bug, I have a vllm openai server (0.2.1.post1) running locally started with the following command: ``` python -m vllm.entrypoints.openai.api_server --model ./zephyr-7b-beta --served-model-name zephyr-7b-beta ``` On the client side, I have this piece of code, slightly adapted from the documentation (only the model name changes). ```python from langchain.llms import VLLMOpenAI llm = VLLMOpenAI( openai_api_key="EMPTY", openai_api_base="http://localhost:8000/v1", model_name="zephyr-7b-beta", ) print(llm("Rome is")) ``` And I got the following error: ```text --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[19], line 6 1 llm = VLLMOpenAI( 2 openai_api_key="EMPTY", 3 openai_api_base="http://localhost:8000/v1", 4 model_name="zephyr-7b-beta", 5 ) ----> 6 llm("Rome is") File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:876, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs) 869 if not isinstance(prompt, str): 870 raise ValueError( 871 "Argument `prompt` is expected to be a string. Instead found " 872 f"{type(prompt)}. If you want to run the LLM on multiple prompts, use " 873 "`generate` instead." 874 ) 875 return ( --> 876 self.generate( 877 [prompt], 878 stop=stop, 879 callbacks=callbacks, 880 tags=tags, 881 metadata=metadata, 882 **kwargs, 883 ) 884 .generations[0][0] 885 .text 886 ) File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:656, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs) 641 raise ValueError( 642 "Asked to cache, but no cache found at `langchain.cache`." 643 ) 644 run_managers = [ 645 callback_manager.on_llm_start( 646 dumpd(self), (...) 654 ) 655 ] --> 656 output = self._generate_helper( 657 prompts, stop, run_managers, bool(new_arg_supported), **kwargs 658 ) 659 return output 660 if len(missing_prompts) > 0: File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:544, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 542 for run_manager in run_managers: 543 run_manager.on_llm_error(e) --> 544 raise e 545 flattened_outputs = output.flatten() 546 for manager, flattened_output in zip(run_managers, flattened_outputs): File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/base.py:531, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs) 521 def _generate_helper( 522 self, 523 prompts: List[str], (...) 
527 **kwargs: Any, 528 ) -> LLMResult: 529 try: 530 output = ( --> 531 self._generate( 532 prompts, 533 stop=stop, 534 # TODO: support multiple run managers 535 run_manager=run_managers[0] if run_managers else None, 536 **kwargs, 537 ) 538 if new_arg_supported 539 else self._generate(prompts, stop=stop) 540 ) 541 except BaseException as e: 542 for run_manager in run_managers: File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:454, in BaseOpenAI._generate(self, prompts, stop, run_manager, **kwargs) 442 choices.append( 443 { 444 "text": generation.text, (...) 451 } 452 ) 453 else: --> 454 response = completion_with_retry( 455 self, prompt=_prompts, run_manager=run_manager, **params 456 ) 457 if not isinstance(response, dict): 458 # V1 client returns the response in an PyDantic object instead of 459 # dict. For the transition period, we deep convert it to dict. 460 response = response.dict() File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/langchain/llms/openai.py:114, in completion_with_retry(llm, run_manager, **kwargs) 112 """Use tenacity to retry the completion call.""" 113 if is_openai_v1(): --> 114 return llm.client.create(**kwargs) 116 retry_decorator = _create_retry_decorator(llm, run_manager=run_manager) 118 @retry_decorator 119 def _completion_with_retry(**kwargs: Any) -> Any: File ~/softwares/miniconda3/envs/demo/lib/python3.9/site-packages/openai/_utils/_utils.py:299, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs) 297 msg = f"Missing required argument: {quote(missing[0])}" 298 raise TypeError(msg) --> 299 return func(*args, **kwargs) TypeError: create() got an unexpected keyword argument 'api_key' ``` It seems that if I remove the line 158 from `langchain/llms/vllm.py`, the code is working. ### Expected behavior I expect a completion with no error.
https://github.com/langchain-ai/langchain/issues/13507
https://github.com/langchain-ai/langchain/pull/13517
6bc08266e0c9ca7841bb322259e69a9c0dd6a08d
69d39e2173fcb44cdcd334cb912acaf7b148dff6
"2023-11-17T08:56:07Z"
python
"2023-11-20T01:49:55Z"
libs/langchain/langchain/llms/vllm.py
"""vLLM OpenAI-compatible API client""" @property def _invocation_params(self) -> Dict[str, Any]: """Get the parameters used to invoke the model.""" openai_creds: Dict[str, Any] = { "api_key": self.openai_api_key, "api_base": self.openai_api_base, } return { "model": self.model_name, **openai_creds, **self._default_params, "logit_bias": None, } @property def _llm_type(self) -> str: """Return type of llm.""" return "vllm-openai"
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/azure_openai.py
"""Azure OpenAI embeddings wrapper.""" from __future__ import annotations import os import warnings from typing import Dict, Optional, Union from langchain.embeddings.openai import OpenAIEmbeddings from langchain.pydantic_v1 import Field, root_validator from langchain.utils import get_from_dict_or_env from langchain.utils.openai import is_openai_v1 class AzureOpenAIEmbeddings(OpenAIEmbeddings):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/azure_openai.py
"""`Azure OpenAI` Embeddings API.""" azure_endpoint: Union[str, None] = None """Your Azure endpoint, including the resource. Automatically inferred from env var `AZURE_OPENAI_ENDPOINT` if not provided. Example: `https://example-resource.azure.openai.com/` """ deployment: Optional[str] = Field(default=None, alias="azure_deployment") """A model deployment. If given sets the base client URL to include `/deployments/{azure_deployment}`. Note: this means you won't be able to use non-deployment endpoints. """ openai_api_key: Union[str, None] = Field(default=None, alias="api_key") """Automatically inferred from env var `AZURE_OPENAI_API_KEY` if not provided.""" azure_ad_token: Union[str, None] = None """Your Azure Active Directory token. Automatically inferred from env var `AZURE_OPENAI_AD_TOKEN` if not provided. For more: https://www.microsoft.com/en-us/security/business/identity-access/microsoft-entra-id. """ azure_ad_token_provider: Union[str, None] = None """A function that returns an Azure Active Directory token. Will be invoked on every request. """ openai_api_version: Optional[str] = Field(default=None, alias="api_version") """Automatically inferred from env var `OPENAI_API_VERSION` if not provided.""" validate_base_url: bool = True @root_validator() def validate_environment(cls, values: Dict) -> Dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/azure_openai.py
"""Validate that api key and python package exists in environment.""" values["openai_api_key"] = ( values["openai_api_key"] or os.getenv("AZURE_OPENAI_API_KEY") or os.getenv("OPENAI_API_KEY") ) values["openai_api_base"] = values["openai_api_base"] or os.getenv(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/azure_openai.py
"OPENAI_API_BASE" ) values["openai_api_version"] = values["openai_api_version"] or os.getenv( "OPENAI_API_VERSION", default="2023-05-15" ) values["openai_api_type"] = get_from_dict_or_env( values, "openai_api_type", "OPENAI_API_TYPE", default="azure" ) values["openai_organization"] = ( values["openai_organization"] or os.getenv("OPENAI_ORG_ID") or os.getenv("OPENAI_ORGANIZATION") ) values["openai_proxy"] = get_from_dict_or_env( values, "openai_proxy", "OPENAI_PROXY", default="", ) values["azure_endpoint"] = values["azure_endpoint"] or os.getenv( "AZURE_OPENAI_ENDPOINT" ) values["azure_ad_token"] = values["azure_ad_token"] or os.getenv( "AZURE_OPENAI_AD_TOKEN" ) try: import openai except ImportError: raise ImportError( "Could not import openai python package. "
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/azure_openai.py
"Please install it with `pip install openai`." ) if is_openai_v1(): openai_api_base = values["openai_api_base"] if openai_api_base and values["validate_base_url"]: if "/openai" not in openai_api_base: values["openai_api_base"] += "/openai" warnings.warn( "As of openai>=1.0.0, Azure endpoints should be specified via " f"the `azure_endpoint` param not `openai_api_base` " f"(or alias `base_url`). Updating `openai_api_base` from " f"{openai_api_base} to {values['openai_api_base']}." ) if values["deployment"]: warnings.warn( "As of openai>=1.0.0, if `deployment` (or alias " "`azure_deployment`) is specified then " "`openai_api_base` (or alias `base_url`) should not be. " "Instead use `deployment` (or alias `azure_deployment`) " "and `azure_endpoint`." ) if values["deployment"] not in values["openai_api_base"]: warnings.warn( "As of openai>=1.0.0, if `openai_api_base` " "(or alias `base_url`) is specified it is expected to be " "of the form " "https://example-resource.azure.openai.com/openai/deployments/example-deployment. " f"Updating {openai_api_base} to "
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/azure_openai.py
f"{values['openai_api_base']}." ) values["openai_api_base"] += ( "/deployments/" + values["deployment"] ) values["deployment"] = None client_params = { "api_version": values["openai_api_version"], "azure_endpoint": values["azure_endpoint"], "azure_deployment": values["deployment"], "api_key": values["openai_api_key"], "azure_ad_token": values["azure_ad_token"], "azure_ad_token_provider": values["azure_ad_token_provider"], "organization": values["openai_organization"], "base_url": values["openai_api_base"], "timeout": values["request_timeout"], "max_retries": values["max_retries"], "default_headers": values["default_headers"], "default_query": values["default_query"], "http_client": values["http_client"], } values["client"] = openai.AzureOpenAI(**client_params).embeddings values["async_client"] = openai.AsyncAzureOpenAI(**client_params).embeddings else: values["client"] = openai.Embedding return values @property def _llm_type(self) -> str: return "azure-openai-chat"
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
from __future__ import annotations

import logging
import os
import warnings
from importlib.metadata import version
from typing import (
    Any,
    Callable,
    Dict,
    List,
    Literal,
    Mapping,
    Optional,
    Sequence,
    Set,
    Tuple,
    Union,
    cast,
)

import numpy as np
from packaging.version import Version, parse
from tenacity import (
    AsyncRetrying,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
    before_sleep_log,
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)

from langchain.pydantic_v1 import BaseModel, Extra, Field, root_validator
from langchain.schema.embeddings import Embeddings
from langchain.utils import get_from_dict_or_env, get_pydantic_field_names

logger = logging.getLogger(__name__)


def _create_retry_decorator(embeddings: OpenAIEmbeddings) -> Callable[[Any], Any]:
    import openai

    min_seconds = 4
    max_seconds = 10
    return retry(
        reraise=True,
        stop=stop_after_attempt(embeddings.max_retries),
        wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
        retry=(
            retry_if_exception_type(openai.error.Timeout)
            | retry_if_exception_type(openai.error.APIError)
            | retry_if_exception_type(openai.error.APIConnectionError)
            | retry_if_exception_type(openai.error.RateLimitError)
            | retry_if_exception_type(openai.error.ServiceUnavailableError)
        ),
        before_sleep=before_sleep_log(logger, logging.WARNING),
    )


def _async_retry_decorator(embeddings: OpenAIEmbeddings) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
    import openai

    min_seconds = 4
    max_seconds = 10
    async_retrying = AsyncRetrying(
        reraise=True,
        stop=stop_after_attempt(embeddings.max_retries),
        wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
        retry=(
            retry_if_exception_type(openai.error.Timeout)
            | retry_if_exception_type(openai.error.APIError)
            | retry_if_exception_type(openai.error.APIConnectionError)
            | retry_if_exception_type(openai.error.RateLimitError)
            | retry_if_exception_type(openai.error.ServiceUnavailableError)
        ),
        before_sleep=before_sleep_log(logger, logging.WARNING),
    )

    def wrap(func: Callable) -> Callable:
        async def wrapped_f(*args: Any, **kwargs: Any) -> Callable:
            async for _ in async_retrying:
                return await func(*args, **kwargs)
            raise AssertionError("this is unreachable")

        return wrapped_f

    return wrap


def _check_response(response: dict, skip_empty: bool = False) -> dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
if any(len(d["embedding"]) == 1 for d in response["data"]) and not skip_empty: import openai raise openai.error.APIError("OpenAI API returned an empty embedding") return response def embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any: """Use tenacity to retry the embedding call.""" if _is_openai_v1(): return embeddings.client.create(**kwargs) retry_decorator = _create_retry_decorator(embeddings) @retry_decorator def _embed_with_retry(**kwargs: Any) -> Any: response = embeddings.client.create(**kwargs) return _check_response(response, skip_empty=embeddings.skip_empty) return _embed_with_retry(**kwargs) async def async_embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any: """Use tenacity to retry the embedding call.""" if _is_openai_v1(): return await embeddings.async_client.create(**kwargs) @_async_retry_decorator(embeddings) async def _async_embed_with_retry(**kwargs: Any) -> Any: response = await embeddings.client.acreate(**kwargs) return _check_response(response, skip_empty=embeddings.skip_empty) return await _async_embed_with_retry(**kwargs) def _is_openai_v1() -> bool: _version = parse(version("openai")) return _version >= Version("1.0.0") class OpenAIEmbeddings(BaseModel, Embeddings):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
"""OpenAI embedding models. To use, you should have the ``openai`` python package installed, and the environment variable ``OPENAI_API_KEY`` set with your API key or pass it as a named parameter to the constructor. Example: .. code-block:: python from langchain.embeddings import OpenAIEmbeddings openai = OpenAIEmbeddings(openai_api_key="my-api-key") In order to use the library with Microsoft Azure endpoints, you need to set the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION. The OPENAI_API_TYPE must be set to 'azure' and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the model parameter. Example: .. code-block:: python import os os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/" os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key" os.environ["OPENAI_API_VERSION"] = "2023-05-15" os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080" from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings( deployment="your-embeddings-deployment-name", model="your-embeddings-model-name", openai_api_base="https://your-endpoint.openai.azure.com/", openai_api_type="azure", ) text = "This is a test query."
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
query_result = embeddings.embed_query(text)
    """

    client: Any = Field(default=None, exclude=True)
    async_client: Any = Field(default=None, exclude=True)
    model: str = "text-embedding-ada-002"
    deployment: Optional[str] = model
    openai_api_version: Optional[str] = Field(default=None, alias="api_version")
    """Automatically inferred from env var `OPENAI_API_VERSION` if not provided."""
    openai_api_base: Optional[str] = Field(default=None, alias="base_url")
    """Base URL path for API requests, leave blank if not using a proxy or service
        emulator."""
    openai_api_type: Optional[str] = None
    openai_proxy: Optional[str] = None
    embedding_ctx_length: int = 8191
    """The maximum number of tokens to embed at once."""
    openai_api_key: Optional[str] = Field(default=None, alias="api_key")
    """Automatically inferred from env var `OPENAI_API_KEY` if not provided."""
    openai_organization: Optional[str] = Field(default=None, alias="organization")
    """Automatically inferred from env var `OPENAI_ORG_ID` if not provided."""
    allowed_special: Union[Literal["all"], Set[str]] = set()
    disallowed_special: Union[Literal["all"], Set[str], Sequence[str]] = "all"
    chunk_size: int = 1000
    """Maximum number of texts to embed in each batch"""
    max_retries: int = 2
    """Maximum number of retries to make when generating."""
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
request_timeout: Optional[Union[float, Tuple[float, float], Any]] = Field(
        default=None, alias="timeout"
    )
    """Timeout for requests to OpenAI completion API. Can be float, httpx.Timeout or
        None."""
    headers: Any = None
    tiktoken_model_name: Optional[str] = None
    """The model name to pass to tiktoken when using this class.
    Tiktoken is used to count the number of tokens in documents to constrain
    them to be under a certain limit. By default, when set to None, this will
    be the same as the embedding model name. However, there are some cases
    where you may want to use this Embedding class with a model name not
    supported by tiktoken. This can include when using Azure embeddings or
    when using one of the many model providers that expose an OpenAI-like
    API but with different models. In those cases, in order to avoid erroring
    when tiktoken is called, you can specify a model name to use here."""
    show_progress_bar: bool = False
    """Whether to show a progress bar when embedding."""
    model_kwargs: Dict[str, Any] = Field(default_factory=dict)
    """Holds any model parameters valid for `create` call not explicitly specified."""
    skip_empty: bool = False
    """Whether to skip empty strings when embedding or raise an error.
    Defaults to not skipping."""
    default_headers: Union[Mapping[str, str], None] = None
    default_query: Union[Mapping[str, object], None] = None
    http_client: Union[Any, None] = None
    """Optional httpx.Client."""

    class Config:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
"""Configuration for this pydantic object.""" extra = Extra.forbid allow_population_by_field_name = True @root_validator(pre=True) def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]: """Build extra kwargs from additional params that were passed in.""" all_required_field_names = get_pydantic_field_names(cls) extra = values.get("model_kwargs", {}) for field_name in list(values): if field_name in extra: raise ValueError(f"Found {field_name} supplied twice.") if field_name not in all_required_field_names: warnings.warn( f"""WARNING! {field_name} is not default parameter. {field_name} was transferred to model_kwargs. Please confirm that {field_name} is what you intended.""" ) extra[field_name] = values.pop(field_name) invalid_model_kwargs = all_required_field_names.intersection(extra.keys()) if invalid_model_kwargs: raise ValueError( f"Parameters {invalid_model_kwargs} should be specified explicitly. " f"Instead they were passed in as part of `model_kwargs` parameter." ) values["model_kwargs"] = extra return values @root_validator() def validate_environment(cls, values: Dict) -> Dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
"""Validate that api key and python package exists in environment.""" values["openai_api_key"] = get_from_dict_or_env( values, "openai_api_key", "OPENAI_API_KEY" ) values["openai_api_base"] = values["openai_api_base"] or os.getenv( "OPENAI_API_BASE" ) values["openai_api_type"] = get_from_dict_or_env( values, "openai_api_type", "OPENAI_API_TYPE", default="", ) values["openai_proxy"] = get_from_dict_or_env( values, "openai_proxy",
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
"OPENAI_PROXY", default="", ) if values["openai_api_type"] in ("azure", "azure_ad", "azuread"): default_api_version = "2023-05-15" values["chunk_size"] = max(values["chunk_size"], 16) else: default_api_version = "" values["openai_api_version"] = get_from_dict_or_env( values, "openai_api_version", "OPENAI_API_VERSION", default=default_api_version, ) values["openai_organization"] = ( values["openai_organization"] or os.getenv("OPENAI_ORG_ID") or os.getenv("OPENAI_ORGANIZATION") ) try: import openai except ImportError: raise ImportError( "Could not import openai python package. " "Please install it with `pip install openai`." )
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
else:
            if _is_openai_v1():
                if values["openai_api_type"] in ("azure", "azure_ad", "azuread"):
                    warnings.warn(
                        "If you have openai>=1.0.0 installed and are using Azure, "
                        "please use the `AzureOpenAIEmbeddings` class."
                    )
                client_params = {
                    "api_key": values["openai_api_key"],
                    "organization": values["openai_organization"],
                    "base_url": values["openai_api_base"],
                    "timeout": values["request_timeout"],
                    "max_retries": values["max_retries"],
                    "default_headers": values["default_headers"],
                    "default_query": values["default_query"],
                    "http_client": values["http_client"],
                }
                if not values.get("client"):
                    values["client"] = openai.OpenAI(**client_params).embeddings
                if not values.get("async_client"):
                    values["async_client"] = openai.AsyncOpenAI(
                        **client_params
                    ).embeddings
            elif not values.get("client"):
                values["client"] = openai.Embedding
            else:
                pass
        return values

    @property
    def _invocation_params(self) -> Dict[str, Any]:
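For reference, a version gate like `_is_openai_v1` can be written against package metadata. This is a sketch under the assumption that the `packaging` library is available, not necessarily the exact helper defined elsewhere in this module:

    from importlib import metadata

    from packaging.version import Version


    def _is_openai_v1() -> bool:
        """Return True when the installed openai SDK is the 1.x client."""
        try:
            version = metadata.version("openai")
        except metadata.PackageNotFoundError:
            return False
        return Version(version) >= Version("1.0.0")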
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
if _is_openai_v1():
            openai_args: Dict = {"model": self.model, **self.model_kwargs}
        else:
            openai_args = {
                "model": self.model,
                "request_timeout": self.request_timeout,
                "headers": self.headers,
                "api_key": self.openai_api_key,
                "organization": self.openai_organization,
                "api_base": self.openai_api_base,
                "api_type": self.openai_api_type,
                "api_version": self.openai_api_version,
                **self.model_kwargs,
            }
            if self.openai_api_type in ("azure", "azure_ad", "azuread"):
                openai_args["engine"] = self.deployment
        if self.openai_proxy:
            try:
                import openai
            except ImportError:
                raise ImportError(
                    "Could not import openai python package. "
                    "Please install it with `pip install openai`."
                )
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
openai.proxy = {
                "http": self.openai_proxy,
                "https": self.openai_proxy,
            }
        return openai_args

    def _get_len_safe_embeddings(
        self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
    ) -> List[List[float]]:
        embeddings: List[List[float]] = [[] for _ in range(len(texts))]
        try:
            import tiktoken
        except ImportError:
            raise ImportError(
                "Could not import tiktoken python package. "
                "This is needed in order to use OpenAIEmbeddings. "
                "Please install it with `pip install tiktoken`."
            )

        tokens = []
        indices = []
        model_name = self.tiktoken_model_name or self.model
        try:
            encoding = tiktoken.encoding_for_model(model_name)
        except KeyError:
            logger.warning("Warning: model not found. Using cl100k_base encoding.")
            model = "cl100k_base"
            encoding = tiktoken.get_encoding(model)
        for i, text in enumerate(texts):
            if self.model.endswith("001"):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
# replace newlines, which can negatively affect performance
                # for the older *-001 embedding models
                text = text.replace("\n", " ")

            token = encoding.encode(
                text,
                allowed_special=self.allowed_special,
                disallowed_special=self.disallowed_special,
            )
            for j in range(0, len(token), self.embedding_ctx_length):
                tokens.append(token[j : j + self.embedding_ctx_length])
                indices.append(i)

        batched_embeddings: List[List[float]] = []
        _chunk_size = chunk_size or self.chunk_size

        if self.show_progress_bar:
            try:
                from tqdm.auto import tqdm

                _iter = tqdm(range(0, len(tokens), _chunk_size))
            except ImportError:
                _iter = range(0, len(tokens), _chunk_size)
        else:
            _iter = range(0, len(tokens), _chunk_size)

        for i in _iter:
            response = embed_with_retry(
                self,
                input=tokens[i : i + _chunk_size],
                **self._invocation_params,
            )
            if not isinstance(response, dict):
                response = response.dict()
            batched_embeddings.extend(r["embedding"] for r in response["data"])
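A standalone sketch of the token-windowing step above, so the batching is easier to follow (tiktoken is the only dependency; the texts and window length are illustrative):

    import tiktoken

    ctx_length = 8191  # embedding_ctx_length default from the class above
    encoding = tiktoken.get_encoding("cl100k_base")

    texts = ["short text", "another document"]
    tokens, indices = [], []
    for i, text in enumerate(texts):
        token = encoding.encode(text)
        # split each text into windows no longer than the model's context
        for j in range(0, len(token), ctx_length):
            tokens.append(token[j : j + ctx_length])
            indices.append(i)  # remember which source text each window came from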
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
results: List[List[List[float]]] = [[] for _ in range(len(texts))]
        num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]
        for i in range(len(indices)):
            if self.skip_empty and len(batched_embeddings[i]) == 1:
                continue
            results[indices[i]].append(batched_embeddings[i])
            num_tokens_in_batch[indices[i]].append(len(tokens[i]))

        for i in range(len(texts)):
            _result = results[i]
            if len(_result) == 0:
                average_embedded = embed_with_retry(
                    self,
                    input="",
                    **self._invocation_params,
                )
                if not isinstance(average_embedded, dict):
                    average_embedded = average_embedded.dict()
                average = average_embedded["data"][0]["embedding"]
            else:
                average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])
            embeddings[i] = (average / np.linalg.norm(average)).tolist()

        return embeddings

    async def _aget_len_safe_embeddings(
        self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
    ) -> List[List[float]]:
        embeddings: List[List[float]] = [[] for _ in range(len(texts))]
        try:
            import tiktoken
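The recombination step above can be checked in isolation. A minimal numpy sketch of the token-weighted average followed by L2 re-normalization (the vectors and token counts are made up):

    import numpy as np

    # two chunk embeddings from the same long document, weighted by token count
    chunk_embeddings = np.array([[0.1, 0.9], [0.5, 0.5]])
    token_counts = [8191, 1200]

    average = np.average(chunk_embeddings, axis=0, weights=token_counts)
    unit = average / np.linalg.norm(average)  # re-normalize to unit length
    print(unit, np.linalg.norm(unit))  # the norm comes out as 1.0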
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
except ImportError:
            raise ImportError(
                "Could not import tiktoken python package. "
                "This is needed in order to use OpenAIEmbeddings. "
                "Please install it with `pip install tiktoken`."
            )

        tokens = []
        indices = []
        model_name = self.tiktoken_model_name or self.model
        try:
            encoding = tiktoken.encoding_for_model(model_name)
        except KeyError:
            logger.warning("Warning: model not found. Using cl100k_base encoding.")
            model = "cl100k_base"
            encoding = tiktoken.get_encoding(model)
        for i, text in enumerate(texts):
            if self.model.endswith("001"):
                # replace newlines, which can negatively affect performance
                # for the older *-001 embedding models
                text = text.replace("\n", " ")

            token = encoding.encode(
                text,
                allowed_special=self.allowed_special,
                disallowed_special=self.disallowed_special,
            )
            for j in range(0, len(token), self.embedding_ctx_length):
                tokens.append(token[j : j + self.embedding_ctx_length])
                indices.append(i)

        batched_embeddings: List[List[float]] = []
        _chunk_size = chunk_size or self.chunk_size
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
for i in range(0, len(tokens), _chunk_size):
            response = await async_embed_with_retry(
                self,
                input=tokens[i : i + _chunk_size],
                **self._invocation_params,
            )
            if not isinstance(response, dict):
                response = response.dict()
            batched_embeddings.extend(r["embedding"] for r in response["data"])

        results: List[List[List[float]]] = [[] for _ in range(len(texts))]
        num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]
        for i in range(len(indices)):
            results[indices[i]].append(batched_embeddings[i])
            num_tokens_in_batch[indices[i]].append(len(tokens[i]))

        for i in range(len(texts)):
            _result = results[i]
            if len(_result) == 0:
                # use the async retry helper here as well, so the empty-input
                # fallback does not block the event loop
                average_embedded = await async_embed_with_retry(
                    self,
                    input="",
                    **self._invocation_params,
                )
                if not isinstance(average_embedded, dict):
                    average_embedded = average_embedded.dict()
                average = average_embedded["data"][0]["embedding"]
            else:
                average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])
            embeddings[i] = (average / np.linalg.norm(average)).tolist()

        return embeddings

    def embed_documents(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
self, texts: List[str], chunk_size: Optional[int] = 0
    ) -> List[List[float]]:
        """Call out to OpenAI's embedding endpoint for embedding search docs.

        Args:
            texts: The list of texts to embed.
            chunk_size: The chunk size of embeddings. If None, will use the chunk
                size specified by the class.

        Returns:
            List of embeddings, one for each text.
        """
        engine = cast(str, self.deployment)
        return self._get_len_safe_embeddings(texts, engine=engine)

    async def aembed_documents(
        self, texts: List[str], chunk_size: Optional[int] = 0
    ) -> List[List[float]]:
        """Call out to OpenAI's embedding endpoint async for embedding search docs.

        Args:
            texts: The list of texts to embed.
            chunk_size: The chunk size of embeddings. If None, will use the chunk
                size specified by the class.

        Returns:
            List of embeddings, one for each text.
        """
        engine = cast(str, self.deployment)
        return await self._aget_len_safe_embeddings(texts, engine=engine)

    def embed_query(self, text: str) -> List[float]:
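A short usage sketch for the entry point above (it assumes `OPENAI_API_KEY` is set in the environment; the 1536 dimensionality matches the text-embedding-ada-002 default and the assertions in the tests further down):

    from langchain.embeddings import OpenAIEmbeddings

    embeddings = OpenAIEmbeddings()  # reads OPENAI_API_KEY from the environment
    vectors = embeddings.embed_documents(["first doc", "second doc"])
    assert len(vectors) == 2
    assert len(vectors[0]) == 1536  # ada-002 embedding dimensionality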
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/langchain/embeddings/openai.py
"""Call out to OpenAI's embedding endpoint for embedding query text. Args: text: The text to embed. Returns: Embedding for the text. """ return self.embed_documents([text])[0] async def aembed_query(self, text: str) -> List[float]: """Call out to OpenAI's embedding endpoint async for embedding query text. Args: text: The text to embed. Returns: Embedding for the text. """ embeddings = await self.aembed_documents([text]) return embeddings[0]
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/tests/integration_tests/embeddings/test_azure_openai.py
"""Test openai embeddings.""" import os from typing import Any import numpy as np import pytest from langchain.embeddings import AzureOpenAIEmbeddings def _get_embeddings(**kwargs: Any) -> AzureOpenAIEmbeddings: return AzureOpenAIEmbeddings( openai_api_version=os.environ.get("AZURE_OPENAI_API_VERSION", ""), **kwargs, ) def test_azure_openai_embedding_documents() -> None: """Test openai embeddings.""" documents = ["foo bar"] embedding = _get_embeddings() output = embedding.embed_documents(documents) assert len(output) == 1 assert len(output[0]) == 1536 def test_azure_openai_embedding_documents_multiple() -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/tests/integration_tests/embeddings/test_azure_openai.py
"""Test openai embeddings.""" documents = ["foo bar", "bar foo", "foo"] embedding = _get_embeddings(chunk_size=2) embedding.embedding_ctx_length = 8191 output = embedding.embed_documents(documents) assert len(output) == 3 assert len(output[0]) == 1536 assert len(output[1]) == 1536 assert len(output[2]) == 1536 @pytest.mark.asyncio async def test_azure_openai_embedding_documents_async_multiple() -> None: """Test openai embeddings.""" documents = ["foo bar", "bar foo", "foo"] embedding = _get_embeddings(chunk_size=2) embedding.embedding_ctx_length = 8191 output = await embedding.aembed_documents(documents) assert len(output) == 3 assert len(output[0]) == 1536 assert len(output[1]) == 1536 assert len(output[2]) == 1536 def test_azure_openai_embedding_query() -> None: """Test openai embeddings.""" document = "foo bar" embedding = _get_embeddings() output = embedding.embed_query(document) assert len(output) == 1536 @pytest.mark.asyncio async def test_azure_openai_embedding_async_query() -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,539
New update broke embeddings models
### System Info LangChain version: 0.0.337 Python version: 3.10.13 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction db = Chroma.from_documents(docs, AzureOpenAIEmbeddings()) ### Expected behavior This worked on previous versions of LangChain using OpenAIEmbeddings(), but now I get this error BadRequestError: Error code: 400 - {'error': {'message': 'Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.', 'type': 'invalid_request_error', 'param': None, 'code': None}}
https://github.com/langchain-ai/langchain/issues/13539
https://github.com/langchain-ai/langchain/pull/13425
e53f59f01a2d5020e4a3248380d7a04891c8be1f
6bf9b2cb51f94872bb251ba22fe7e3aefb753d43
"2023-11-17T21:47:33Z"
python
"2023-11-20T02:34:51Z"
libs/langchain/tests/integration_tests/embeddings/test_azure_openai.py
"""Test openai embeddings.""" document = "foo bar" embedding = _get_embeddings() output = await embedding.aembed_query(document) assert len(output) == 1536 @pytest.mark.skip(reason="Unblock scheduled testing. TODO: fix.") def test_azure_openai_embedding_with_empty_string() -> None: """Test openai embeddings with empty string.""" import openai document = ["", "abc"] embedding = _get_embeddings() output = embedding.embed_documents(document) assert len(output) == 2 assert len(output[0]) == 1536 expected_output = openai.Embedding.create(input="", model="text-embedding-ada-002")[ "data" ][0]["embedding"] assert np.allclose(output[0], expected_output) assert len(output[1]) == 1536 def test_embed_documents_normalized() -> None: output = _get_embeddings().embed_documents(["foo walked to the market"]) assert np.isclose(np.linalg.norm(output[0]), 1.0) def test_embed_query_normalized() -> None: output = _get_embeddings().embed_query("foo walked to the market") assert np.isclose(np.linalg.norm(output), 1.0)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,814
Create retriever for Outline to ask questions on knowledge base
### Feature request A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint which allows this to be possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia ### Motivation Outline is an open source project that lets you create a knowledge base, like a wiki. Creating a retriever for Outline will let your team interact with your knowledge base using an LLM. ### Your contribution PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/langchain/retrievers/__init__.py
"""**Retriever** class returns Documents given a text **query**. It is more general than a vector store. A retriever does not need to be able to store documents, only to return (or retrieve) it. Vector stores can be used as the backbone of a retriever, but there are other types of retrievers as well. **Class hierarchy:** .. code-block:: BaseRetriever --> <name>Retriever # Examples: ArxivRetriever, MergerRetriever **Main helpers:** .. code-block:: Document, Serializable, Callbacks, CallbackManagerForRetrieverRun, AsyncCallbackManagerForRetrieverRun """ from langchain.retrievers.arcee import ArceeRetriever from langchain.retrievers.arxiv import ArxivRetriever from langchain.retrievers.azure_cognitive_search import AzureCognitiveSearchRetriever from langchain.retrievers.bm25 import BM25Retriever from langchain.retrievers.chaindesk import ChaindeskRetriever
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,814
Create retriever for Outline to ask questions on knowledge base
### Feature request A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint which allows this to be possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia ### Motivation Outline is an open source project that lets you create a knowledge base, like a wiki. Creating a retriever for Outline will let your team interact with your knowledge base using an LLM. ### Your contribution PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/langchain/retrievers/__init__.py
from langchain.retrievers.chatgpt_plugin_retriever import ChatGPTPluginRetriever
from langchain.retrievers.cohere_rag_retriever import CohereRagRetriever
from langchain.retrievers.contextual_compression import ContextualCompressionRetriever
from langchain.retrievers.docarray import DocArrayRetriever
from langchain.retrievers.elastic_search_bm25 import ElasticSearchBM25Retriever
from langchain.retrievers.embedchain import EmbedchainRetriever
from langchain.retrievers.ensemble import EnsembleRetriever
from langchain.retrievers.google_cloud_documentai_warehouse import (
    GoogleDocumentAIWarehouseRetriever,
)
from langchain.retrievers.google_vertex_ai_search import (
    GoogleCloudEnterpriseSearchRetriever,
    GoogleVertexAIMultiTurnSearchRetriever,
    GoogleVertexAISearchRetriever,
)
from langchain.retrievers.kay import KayAiRetriever
from langchain.retrievers.kendra import AmazonKendraRetriever
from langchain.retrievers.knn import KNNRetriever
from langchain.retrievers.llama_index import (
    LlamaIndexGraphRetriever,
    LlamaIndexRetriever,
)
from langchain.retrievers.merger_retriever import MergerRetriever
from langchain.retrievers.metal import MetalRetriever
from langchain.retrievers.milvus import MilvusRetriever
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.retrievers.parent_document_retriever import ParentDocumentRetriever
from langchain.retrievers.pinecone_hybrid_search import PineconeHybridSearchRetriever
from langchain.retrievers.pubmed import PubMedRetriever
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,814
Create retriever for Outline to ask questions on knowledge base
### Feature request A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint which allows this to be possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia ### Motivation Outline is an open source project that lets you create a knowledge base, like a wiki. Creating a retriever for Outline will let your team interact with your knowledge base using an LLM. ### Your contribution PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/langchain/retrievers/__init__.py
from langchain.retrievers.re_phraser import RePhraseQueryRetriever
from langchain.retrievers.remote_retriever import RemoteLangChainRetriever
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.retrievers.svm import SVMRetriever
from langchain.retrievers.tavily_search_api import TavilySearchAPIRetriever
from langchain.retrievers.tfidf import TFIDFRetriever
from langchain.retrievers.time_weighted_retriever import (
    TimeWeightedVectorStoreRetriever,
)
from langchain.retrievers.vespa_retriever import VespaRetriever
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.retrievers.web_research import WebResearchRetriever
from langchain.retrievers.wikipedia import WikipediaRetriever
from langchain.retrievers.zep import ZepRetriever
from langchain.retrievers.zilliz import ZillizRetriever

__all__ = [
    "AmazonKendraRetriever",
    "ArceeRetriever",
    "ArxivRetriever",
    "AzureCognitiveSearchRetriever",
    "ChatGPTPluginRetriever",
    "ContextualCompressionRetriever",
    "ChaindeskRetriever",
    "CohereRagRetriever",
    "ElasticSearchBM25Retriever",
    "EmbedchainRetriever",
    "GoogleDocumentAIWarehouseRetriever",
    "GoogleCloudEnterpriseSearchRetriever",
    "GoogleVertexAIMultiTurnSearchRetriever",
    "GoogleVertexAISearchRetriever",
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,814
Create retriever for Outline to ask questions on knowledge base
### Feature request A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint which allows this to be possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia ### Motivation Outline is an open source project that lets you create a knowledge base, like a wiki. Creating a retriever for Outline will let your team interact with your knowledge base using an LLM. ### Your contribution PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/langchain/retrievers/__init__.py
"KayAiRetriever", "KNNRetriever", "LlamaIndexGraphRetriever", "LlamaIndexRetriever", "MergerRetriever", "MetalRetriever", "MilvusRetriever", "MultiQueryRetriever", "PineconeHybridSearchRetriever", "PubMedRetriever", "RemoteLangChainRetriever", "SVMRetriever", "SelfQueryRetriever", "TavilySearchAPIRetriever", "TFIDFRetriever", "BM25Retriever", "TimeWeightedVectorStoreRetriever", "VespaRetriever", "WeaviateHybridSearchRetriever", "WikipediaRetriever", "ZepRetriever", "ZillizRetriever", "DocArrayRetriever", "RePhraseQueryRetriever", "WebResearchRetriever", "EnsembleRetriever", "ParentDocumentRetriever", "MultiVectorRetriever", ]
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,814
Create retriever for Outline to ask questions on knowledge base
### Feature request A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint which allows this to be possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia ### Motivation Outline is an open source project that lets you create a knowledge base, like a wiki. Creating a retriever for Outline will let your team interact with your knowledge base using an LLM. ### Your contribution PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/langchain/utilities/__init__.py
"""**Utilities** are the integrations with third-part systems and packages. Other LangChain classes use **Utilities** to interact with third-part systems and packages. """ from typing import Any from langchain.utilities.requests import Requests, RequestsWrapper, TextRequestsWrapper def _import_alpha_vantage() -> Any: from langchain.utilities.alpha_vantage import AlphaVantageAPIWrapper return AlphaVantageAPIWrapper def _import_apify() -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,814
Create retriever for Outline to ask questions on knowledge base
### Feature request A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint which allows this to be possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia ### Motivation Outline is an open source project that lets you create a knowledge base, like a wiki. Creating a retriever for Outline will let your team interact with your knowledge base using an LLM. ### Your contribution PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/langchain/utilities/__init__.py
from langchain.utilities.apify import ApifyWrapper

    return ApifyWrapper


def _import_arcee() -> Any:
    from langchain.utilities.arcee import ArceeWrapper

    return ArceeWrapper


def _import_arxiv() -> Any:
    from langchain.utilities.arxiv import ArxivAPIWrapper

    return ArxivAPIWrapper


def _import_awslambda() -> Any:
    from langchain.utilities.awslambda import LambdaWrapper

    return LambdaWrapper


def _import_bibtex() -> Any:
    from langchain.utilities.bibtex import BibtexparserWrapper

    return BibtexparserWrapper


def _import_bing_search() -> Any:
    from langchain.utilities.bing_search import BingSearchAPIWrapper

    return BingSearchAPIWrapper


def _import_brave_search() -> Any:
    from langchain.utilities.brave_search import BraveSearchWrapper

    return BraveSearchWrapper


def _import_duckduckgo_search() -> Any:
    from langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper

    return DuckDuckGoSearchAPIWrapper


def _import_golden_query() -> Any:
    from langchain.utilities.golden_query import GoldenQueryAPIWrapper

    return GoldenQueryAPIWrapper


def _import_google_places_api() -> Any:
    from langchain.utilities.google_places_api import GooglePlacesAPIWrapper

    return GooglePlacesAPIWrapper


def _import_google_scholar() -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,814
Create retriever for Outline to ask questions on knowledge base
### Feature request A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint which allows this to be possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia ### Motivation Outline is an open source project that lets you create a knowledge base, like a wiki. Creating a retriever for Outline will let your team interact with your knowledge base using an LLM. ### Your contribution PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/langchain/utilities/__init__.py
from langchain.utilities.google_scholar import GoogleScholarAPIWrapper

    return GoogleScholarAPIWrapper


def _import_google_search() -> Any:
    from langchain.utilities.google_search import GoogleSearchAPIWrapper

    return GoogleSearchAPIWrapper


def _import_google_serper() -> Any:
    from langchain.utilities.google_serper import GoogleSerperAPIWrapper

    return GoogleSerperAPIWrapper


def _import_graphql() -> Any:
    from langchain.utilities.graphql import GraphQLAPIWrapper

    return GraphQLAPIWrapper


def _import_jira() -> Any:
    from langchain.utilities.jira import JiraAPIWrapper

    return JiraAPIWrapper


def _import_max_compute() -> Any:
    from langchain.utilities.max_compute import MaxComputeAPIWrapper

    return MaxComputeAPIWrapper


def _import_metaphor_search() -> Any:
    from langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper

    return MetaphorSearchAPIWrapper


def _import_openweathermap() -> Any:
    from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper

    return OpenWeatherMapAPIWrapper


def _import_portkey() -> Any:
    from langchain.utilities.portkey import Portkey

    return Portkey


def _import_powerbi() -> Any:
    from langchain.utilities.powerbi import PowerBIDataset

    return PowerBIDataset


def _import_pubmed() -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,814
Create retriever for Outline to ask questions on knowledge base
### Feature request A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint which allows this to be possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia ### Motivation Outline is an open source project that lets you create a knowledge base, like a wiki. Creating a retriever for Outline will let your team interact with your knowledge base using an LLM. ### Your contribution PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/langchain/utilities/__init__.py
from langchain.utilities.pubmed import PubMedAPIWrapper

    return PubMedAPIWrapper


def _import_python() -> Any:
    from langchain.utilities.python import PythonREPL

    return PythonREPL


def _import_scenexplain() -> Any:
    from langchain.utilities.scenexplain import SceneXplainAPIWrapper

    return SceneXplainAPIWrapper


def _import_searchapi() -> Any:
    from langchain.utilities.searchapi import SearchApiAPIWrapper

    return SearchApiAPIWrapper


def _import_searx_search() -> Any:
    from langchain.utilities.searx_search import SearxSearchWrapper

    return SearxSearchWrapper


def _import_serpapi() -> Any:
    from langchain.utilities.serpapi import SerpAPIWrapper

    return SerpAPIWrapper


def _import_spark_sql() -> Any:
    from langchain.utilities.spark_sql import SparkSQL

    return SparkSQL


def _import_sql_database() -> Any:
    from langchain.utilities.sql_database import SQLDatabase

    return SQLDatabase


def _import_tensorflow_datasets() -> Any:
    from langchain.utilities.tensorflow_datasets import TensorflowDatasets

    return TensorflowDatasets


def _import_twilio() -> Any:
    from langchain.utilities.twilio import TwilioAPIWrapper

    return TwilioAPIWrapper


def _import_wikipedia() -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,814
Create retriever for Outline to ask questions on knowledge base
### Feature request A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint which allows this to be possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia ### Motivation Outline is an open source project that lets you create a knowledge base, like a wiki. Creating a retriever for Outline will let your team interact with your knowledge base using an LLM. ### Your contribution PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/langchain/utilities/__init__.py
from langchain.utilities.wikipedia import WikipediaAPIWrapper

    return WikipediaAPIWrapper


def _import_wolfram_alpha() -> Any:
    from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper

    return WolframAlphaAPIWrapper


def _import_zapier() -> Any:
    from langchain.utilities.zapier import ZapierNLAWrapper

    return ZapierNLAWrapper


def __getattr__(name: str) -> Any:
    if name == "AlphaVantageAPIWrapper":
        return _import_alpha_vantage()
    elif name == "ApifyWrapper":
        return _import_apify()
    elif name == "ArceeWrapper":
        return _import_arcee()
    elif name == "ArxivAPIWrapper":
        return _import_arxiv()
    elif name == "LambdaWrapper":
        return _import_awslambda()
    elif name == "BibtexparserWrapper":
        return _import_bibtex()
    elif name == "BingSearchAPIWrapper":
        return _import_bing_search()
    elif name == "BraveSearchWrapper":
        return _import_brave_search()
    elif name == "DuckDuckGoSearchAPIWrapper":
        return _import_duckduckgo_search()
    elif name == "GoldenQueryAPIWrapper":
        return _import_golden_query()
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,814
Create retriever for Outline to ask questions on a knowledge base
### Feature request

A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint which makes this possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post

The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia

### Motivation

Outline is an open-source project that lets you create a knowledge base, like a wiki. A retriever for Outline will let your team interact with your knowledge base using an LLM.

### Your contribution

PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/langchain/utilities/__init__.py
elif name == "GoogleScholarAPIWrapper": return _import_google_scholar() elif name == "GooglePlacesAPIWrapper": return _import_google_places_api() elif name == "GoogleSearchAPIWrapper": return _import_google_search() elif name == "GoogleSerperAPIWrapper": return _import_google_serper() elif name == "GraphQLAPIWrapper": return _import_graphql() elif name == "JiraAPIWrapper": return _import_jira() elif name == "MaxComputeAPIWrapper": return _import_max_compute() elif name == "MetaphorSearchAPIWrapper": return _import_metaphor_search() elif name == "OpenWeatherMapAPIWrapper": return _import_openweathermap() elif name == "Portkey": return _import_portkey() elif name == "PowerBIDataset": return _import_powerbi() elif name == "PubMedAPIWrapper": return _import_pubmed() elif name == "PythonREPL": return _import_python() elif name == "SceneXplainAPIWrapper": return _import_scenexplain() elif name == "SearchApiAPIWrapper": return _import_searchapi()
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,814
Create retriever for Outline to ask questions on a knowledge base
### Feature request

A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint which makes this possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post

The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia

### Motivation

Outline is an open-source project that lets you create a knowledge base, like a wiki. A retriever for Outline will let your team interact with your knowledge base using an LLM.

### Your contribution

PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/langchain/utilities/__init__.py
elif name == "SearxSearchWrapper": return _import_searx_search() elif name == "SerpAPIWrapper": return _import_serpapi() elif name == "SparkSQL": return _import_spark_sql() elif name == "SQLDatabase": return _import_sql_database() elif name == "TensorflowDatasets": return _import_tensorflow_datasets() elif name == "TwilioAPIWrapper": return _import_twilio() elif name == "WikipediaAPIWrapper": return _import_wikipedia() elif name == "WolframAlphaAPIWrapper": return _import_wolfram_alpha() elif name == "ZapierNLAWrapper": return _import_zapier() else: raise AttributeError(f"Could not find: {name}") __all__ = [ "AlphaVantageAPIWrapper", "ApifyWrapper", "ArceeWrapper", "ArxivAPIWrapper", "BibtexparserWrapper", "BingSearchAPIWrapper", "BraveSearchWrapper", "DuckDuckGoSearchAPIWrapper", "GoldenQueryAPIWrapper",
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,814
Create retriever for Outline to ask questions on a knowledge base
### Feature request

A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint which makes this possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post

The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia

### Motivation

Outline is an open-source project that lets you create a knowledge base, like a wiki. A retriever for Outline will let your team interact with your knowledge base using an LLM.

### Your contribution

PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/langchain/utilities/__init__.py
"GooglePlacesAPIWrapper", "GoogleScholarAPIWrapper", "GoogleSearchAPIWrapper", "GoogleSerperAPIWrapper", "GraphQLAPIWrapper", "JiraAPIWrapper", "LambdaWrapper", "MaxComputeAPIWrapper", "MetaphorSearchAPIWrapper", "OpenWeatherMapAPIWrapper", "Portkey", "PowerBIDataset", "PubMedAPIWrapper", "PythonREPL", "Requests", "RequestsWrapper", "SQLDatabase", "SceneXplainAPIWrapper", "SearchApiAPIWrapper", "SearxSearchWrapper", "SerpAPIWrapper", "SparkSQL", "TensorflowDatasets", "TextRequestsWrapper", "TwilioAPIWrapper", "WikipediaAPIWrapper", "WolframAlphaAPIWrapper", "ZapierNLAWrapper", ]