Dataset columns:

| column | dtype | stats |
| --- | --- | --- |
| status | stringclasses | 1 value |
| repo_name | stringclasses | 31 values |
| repo_url | stringclasses | 31 values |
| issue_id | int64 | 1 – 104k |
| title | stringlengths | 4 – 233 |
| body | stringlengths | 0 – 186k |
| issue_url | stringlengths | 38 – 56 |
| pull_url | stringlengths | 37 – 54 |
| before_fix_sha | stringlengths | 40 – 40 |
| after_fix_sha | stringlengths | 40 – 40 |
| report_datetime | unknown | |
| language | stringclasses | 5 values |
| commit_datetime | unknown | |
| updated_file | stringlengths | 7 – 188 |
| chunk_content | stringlengths | 1 – 1.03M |
status: closed
repo_name: langchain-ai/langchain
repo_url: https://github.com/langchain-ai/langchain
issue_id: 11197
title: Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"]
### System Info

Following the indexing steps from [https://python.langchain.com/docs/modules/data_connection/indexing](https://python.langchain.com/docs/modules/data_connection/indexing), you'll hit the error "redis.exceptions.ResponseError: my_docs: no such index". You'll get this exception while using Redis as a retriever:

![Screenshot from 2023-09-28 16-57-03](https://github.com/langchain-ai/langchain/assets/81446007/afae536e-7888-4183-93d0-bfa65a8845a2)

### Who can help?

_No response_

### Information

- [x] The official example notebooks/scripts
- [ ] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

![Screenshot from 2023-09-28 16-13-18](https://github.com/langchain-ai/langchain/assets/81446007/281206ed-b7c6-4b18-a3cc-25874fec7a06)

The error is here:

![Screenshot from 2023-09-28 16-32-00](https://github.com/langchain-ai/langchain/assets/81446007/f6bf52fb-d0ef-4f38-b247-475522bdfece)

If you look up the index you'll get "(empty list or set)". This line makes it impossible to save keys in the expected format, and there's another problem: the index is not created at all for some reason. I'll try to fix it, but I'm not sure it's possible for me at the moment, so I'm reporting this in the hope it helps.

### Expected behavior

Expected key format inside Redis: "docs:indexname_:12ss2sadd"
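The prefix mismatch the report describes can be illustrated without a running server: the vectorstore only finds documents whose keys start with its index prefix, so keys written under any other prefix (such as the `docs:` form shown above) are invisible to the index. A minimal sketch; the helper names here are invented for illustration:

```python
# Hypothetical illustration of the prefix mismatch described in the report.
# The vectorstore looks up documents under "doc:<index_name>:*"; keys stored
# under any other prefix are never returned by the index.
def index_prefix(index_name: str) -> str:
    return f"doc:{index_name}"


def is_visible_to_index(key: str, index_name: str) -> bool:
    return key.startswith(index_prefix(index_name) + ":")


print(is_visible_to_index("doc:my_docs:abc123", "my_docs"))   # True
print(is_visible_to_index("docs:my_docs:abc123", "my_docs"))  # False
```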
issue_url: https://github.com/langchain-ai/langchain/issues/11197
pull_url: https://github.com/langchain-ai/langchain/pull/11257
before_fix_sha: 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a
after_fix_sha: d5c2ce7c2e1179907400f2c96fc6309a54cbce6a
report_datetime: 2023-09-28T19:57:36Z
language: python
commit_datetime: 2023-10-24T17:51:25Z
updated_file: libs/langchain/langchain/vectorstores/redis/base.py
"""Wrapper around Redis vector database.""" from __future__ import annotations import logging import os import uuid from typing import ( TYPE_CHECKING, Any, Callable, Dict, Iterable, List, Mapping, Optional, Tuple, Type, Union, cast,
```python
)

import numpy as np
import yaml

from langchain._api import deprecated
from langchain.callbacks.manager import (
    AsyncCallbackManagerForRetrieverRun,
    CallbackManagerForRetrieverRun,
)
from langchain.docstore.document import Document
from langchain.schema.embeddings import Embeddings
from langchain.schema.vectorstore import VectorStore, VectorStoreRetriever
from langchain.utilities.redis import (
    _array_to_buffer,
    _buffer_to_array,
    check_redis_module_exist,
    get_client,
)
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.redis.constants import (
    REDIS_REQUIRED_MODULES,
    REDIS_TAG_SEPARATOR,
)
from langchain.vectorstores.utils import maximal_marginal_relevance

logger = logging.getLogger(__name__)

if TYPE_CHECKING:
    from redis.client import Redis as RedisType
    from redis.commands.search.query import Query

    from langchain.vectorstores.redis.filters import RedisFilterExpression
    from langchain.vectorstores.redis.schema import RedisModel


def _redis_key(prefix: str) -> str:
```
"""Redis key schema for a given prefix.""" return f"{prefix}:{uuid.uuid4().hex}" def _redis_prefix(index_name: str) -> str: """Redis key prefix for a given index.""" return f"doc:{index_name}" def _default_relevance_score(val: float) -> float: return 1 - val def check_index_exists(client: RedisType, index_name: str) -> bool: """Check if Redis index exists.""" try: client.ft(index_name).info() except: logger.info("Index does not exist") return False logger.info("Index already exists") return True class Redis(VectorStore): """Redis vector database. To use, you should have the ``redis`` python package installed and have a running Redis Enterprise or Redis-Stack server For production use cases, it is recommended to use Redis Enterprise as the scaling, performance, stability and availability is much better than Redis-Stack. For testing and prototyping, however, this is not required. Redis-Stack is available as a docker container the full vector search API available. .. code-block:: bash # to run redis stack in docker locally docker run -d -p 6379:6379 -p 8001:8001 redis/redis-stack:latest Once running, you can connect to the redis server with the following url schemas:
```python
    - redis://<host>:<port>                         # simple connection
    - redis://<username>:<password>@<host>:<port>   # connection with authentication
    - rediss://<host>:<port>                        # connection with SSL
    - rediss://<username>:<password>@<host>:<port>  # connection with SSL and auth

    Examples:

    The following examples show various ways to use the Redis VectorStore
    with LangChain.

    For all the following examples assume we have the following imports:

    .. code-block:: python

        from langchain.vectorstores import Redis
        from langchain.embeddings import OpenAIEmbeddings

    Initialize, create index, and load Documents
        .. code-block:: python

            from langchain.vectorstores import Redis
            from langchain.embeddings import OpenAIEmbeddings

            rds = Redis.from_documents(
                documents,  # a list of Document objects from loaders or created
                embeddings,  # an Embeddings object
                redis_url="redis://localhost:6379",
            )

    Initialize, create index, and load Documents with metadata
        .. code-block:: python

            rds = Redis.from_texts(
                texts,  # a list of strings
                metadata,  # a list of metadata dicts
                embeddings,  # an Embeddings object
                redis_url="redis://localhost:6379",
            )

    Initialize, create index, and load Documents with metadata and return keys

        .. code-block:: python
```
```python
            rds, keys = Redis.from_texts_return_keys(
                texts,  # a list of strings
                metadata,  # a list of metadata dicts
                embeddings,  # an Embeddings object
                redis_url="redis://localhost:6379",
            )

    For use cases where the index needs to stay alive, you can initialize
    with an index name such that it's easier to reference later

        .. code-block:: python

            rds = Redis.from_texts(
                texts,  # a list of strings
                metadata,  # a list of metadata dicts
                embeddings,  # an Embeddings object
                index_name="my-index",
                redis_url="redis://localhost:6379",
            )

    Initialize and connect to an existing index (from above)

        .. code-block:: python

            rds = Redis.from_existing_index(
                embeddings,  # an Embeddings object
                index_name="my-index",
                redis_url="redis://localhost:6379",
            )

    Advanced examples:

    Custom vector schema can be supplied to change the way that
    Redis creates the underlying vector schema. This is useful
    for production use cases where you want to optimize the
    vector schema for your use case. ex. using HNSW instead of
    FLAT (knn) which is the default

        .. code-block:: python
```
```python
            vector_schema = {
                "algorithm": "HNSW",
            }

            rds = Redis.from_texts(
                texts,  # a list of strings
                metadata,  # a list of metadata dicts
                embeddings,  # an Embeddings object
                vector_schema=vector_schema,
                redis_url="redis://localhost:6379",
            )

    Custom index schema can be supplied to change the way that the
    metadata is indexed. This is useful if you would like to use the
    hybrid querying (filtering) capability of Redis.

    By default, this implementation will automatically generate the index
    schema according to the following rules:

        - All strings are indexed as text fields.
        - All numbers are indexed as numeric fields.
        - All lists of strings are indexed as tag fields (joined by
          ``langchain.vectorstores.redis.constants.REDIS_TAG_SEPARATOR``).
        - All None values are not indexed but still stored in Redis. They are
          not retrievable through the interface here, but the raw Redis client
          can be used to retrieve them.
        - All other types are not indexed.

    To override these rules, you can pass in a custom index schema like
    the following:

        .. code-block:: yaml

            tag:
                - name: credit_score
            text:
                - name: user
                - name: job
```
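The schema-generation rules listed in the docstring can be sketched as a small classifier. This is a hypothetical stand-in for the module's internal generation logic (the real `_generate_field_schema` may differ in detail, for example in how booleans are treated):

```python
def classify_field(value):
    """Map a metadata value to an index field type per the rules above."""
    if isinstance(value, str):
        return "text"
    if isinstance(value, bool):
        return None  # assumption: bools count as "other types", not numeric
    if isinstance(value, (int, float)):
        return "numeric"
    if isinstance(value, list) and all(isinstance(v, str) for v in value):
        return "tag"
    return None  # None values and other types are stored but not indexed


print(classify_field("premium"))   # text
print(classify_field(42))          # numeric
print(classify_field(["a", "b"]))  # tag
print(classify_field(None))        # None
```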
```python
    Typically, the ``credit_score`` field would be a text field since it's a
    string, however, we can override this behavior by specifying the field
    type as shown with the yaml config (can also be a dictionary) above
    and the code below.

        .. code-block:: python

            rds = Redis.from_texts(
                texts,  # a list of strings
                metadata,  # a list of metadata dicts
                embeddings,  # an Embeddings object
                index_schema="path/to/index_schema.yaml",  # can also be a dictionary
                redis_url="redis://localhost:6379",
            )

    When connecting to an existing index where a custom schema has been
    applied, it's important to pass in the same schema to the
    ``from_existing_index`` method. Otherwise, the schema for newly added
    samples will be incorrect and metadata will not be returned.
    """

    DEFAULT_VECTOR_SCHEMA = {
        "name": "content_vector",
        "algorithm": "FLAT",
        "dims": 1536,
        "distance_metric": "COSINE",
        "datatype": "FLOAT32",
    }

    def __init__(
        self,
        redis_url: str,
        index_name: str,
        embedding: Embeddings,
        index_schema: Optional[Union[Dict[str, str], str, os.PathLike]] = None,
        vector_schema: Optional[Dict[str, Union[str, int]]] = None,
```
```python
        relevance_score_fn: Optional[Callable[[float], float]] = None,
        **kwargs: Any,
    ):
        """Initialize with necessary components."""
        self._check_deprecated_kwargs(kwargs)
        try:
            import redis
        except ImportError as e:
            raise ImportError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            ) from e

        self.index_name = index_name
        self._embeddings = embedding
        try:
            redis_client = get_client(redis_url=redis_url, **kwargs)
            check_redis_module_exist(redis_client, REDIS_REQUIRED_MODULES)
        except ValueError as e:
            raise ValueError(f"Redis failed to connect: {e}")

        self.client = redis_client
        self.relevance_score_fn = relevance_score_fn
        self._schema = self._get_schema_with_defaults(index_schema, vector_schema)

    @property
    def embeddings(self) -> Optional[Embeddings]:
        """Access the query embedding object if available."""
        return self._embeddings

    @classmethod
    def from_texts_return_keys(
```
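The guarded-import pattern in `__init__` generalizes to any optional dependency; a sketch, with the helper name invented for illustration:

```python
def require_package(name, pip_name=None):
    """Import a package or raise an ImportError with an install hint."""
    try:
        return __import__(name)
    except ImportError as e:
        raise ImportError(
            f"Could not import {name} python package. "
            f"Please install it with `pip install {pip_name or name}`."
        ) from e


json_mod = require_package("json")
print(json_mod.dumps({"ok": True}))  # {"ok": true}
```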
```python
        cls,
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        index_name: Optional[str] = None,
        index_schema: Optional[Union[Dict[str, str], str, os.PathLike]] = None,
        vector_schema: Optional[Dict[str, Union[str, int]]] = None,
        **kwargs: Any,
    ) -> Tuple[Redis, List[str]]:
        """Create a Redis vectorstore from raw documents.

        This is a user-friendly interface that:
            1. Embeds documents.
            2. Creates a new Redis index if it doesn't already exist.
            3. Adds the documents to the newly created Redis index.
            4. Returns the keys of the newly created documents once stored.

        This method will generate a schema based on the metadata passed in
        if the `index_schema` is not defined. If the `index_schema` is defined,
        it will compare against the generated schema and warn if there are
        differences. If you are purposefully defining the schema for the
        metadata, then you can ignore that warning.

        To examine the schema options, initialize an instance of this class
        and print out the schema using the ``Redis.schema`` property. This
        will include the content and content_vector classes which are
        always present in the langchain schema.

        Example:
            .. code-block:: python

                from langchain.vectorstores import Redis
```
```python
                from langchain.embeddings import OpenAIEmbeddings

                embeddings = OpenAIEmbeddings()
                redis, keys = Redis.from_texts_return_keys(
                    texts,
                    embeddings,
                    redis_url="redis://localhost:6379",
                )

        Args:
            texts (List[str]): List of texts to add to the vectorstore.
            embedding (Embeddings): Embeddings to use for the vectorstore.
            metadatas (Optional[List[dict]], optional): Optional list of metadata
                dicts to add to the vectorstore. Defaults to None.
            index_name (Optional[str], optional): Optional name of the index to
                create or add to. Defaults to None.
            index_schema (Optional[Union[Dict[str, str], str, os.PathLike]],
                optional): Optional fields to index within the metadata.
                Overrides generated schema. Defaults to None.
            vector_schema (Optional[Dict[str, Union[str, int]]], optional):
                Optional vector schema to use. Defaults to None.
            **kwargs (Any): Additional keyword arguments to pass to the Redis
                client.

        Returns:
            Tuple[Redis, List[str]]: Tuple of the Redis instance and the keys of
                the newly created documents.

        Raises:
            ValueError: If the number of metadatas does not match the number
                of texts.
        """
        try:
            import redis
            from langchain.vectorstores.redis.schema import read_schema
```
        except ImportError as e:
            raise ImportError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            ) from e

        redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
        if "redis_url" in kwargs:
            kwargs.pop("redis_url")
        if "generate" in kwargs:
            kwargs.pop("generate")

        keys = None
        if "keys" in kwargs:
            keys = kwargs.pop("keys")

        # Generate a random index name if none was supplied
        if not index_name:
            index_name = uuid.uuid4().hex

        # Type-check the metadata before embedding anything
        if metadatas:
            if isinstance(metadatas, list) and len(metadatas) != len(texts):
                raise ValueError("Number of metadatas must match number of texts")
            if not (isinstance(metadatas, list) and isinstance(metadatas[0], dict)):
                raise ValueError("Metadatas must be a list of dicts")

            generated_schema = _generate_field_schema(metadatas[0])
            if index_schema:
                user_schema = read_schema(index_schema)
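The fallback above generates a throwaway index name when the caller does not supply one. A minimal sketch of that behavior, using a hypothetical helper name (`default_index_name` is not part of langchain):

```python
import uuid


def default_index_name(index_name=None):
    """Return the explicit index name if given, otherwise a random
    32-character hex name, mirroring the `uuid.uuid4().hex` fallback."""
    return index_name or uuid.uuid4().hex


explicit = default_index_name("my_docs")   # explicit name wins
generated = default_index_name()           # random hex name otherwise
```

Note that a randomly generated name makes the index hard to reconnect to later, which is why passing an explicit `index_name` is usually preferable.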
                if user_schema != generated_schema:
                    logger.warning(
                        "`index_schema` does not match generated metadata schema.\n"
                        "If you meant to manually override the schema, please "
                        "ignore this message.\n"
                        f"index_schema: {user_schema}\n"
                        f"generated_schema: {generated_schema}\n"
                    )
            else:
                index_schema = generated_schema

        instance = cls(
            redis_url,
            index_name,
            embedding,
            index_schema=index_schema,
            vector_schema=vector_schema,
            **kwargs,
        )

        # Embed the documents, create the search index, then write the data
        embeddings = embedding.embed_documents(texts)
        instance._create_index(dim=len(embeddings[0]))
        keys = instance.add_texts(texts, metadatas, embeddings, keys=keys)
        return instance, keys

    @classmethod
    def from_texts(
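The schema comparison above relies on `_generate_field_schema`, which infers index fields from the first metadata dict. A simplified, self-contained sketch of that idea (the real langchain implementation is more involved; the field layout here is illustrative):

```python
def generate_field_schema(metadata):
    """Infer a RediSearch-style field schema from one metadata dict:
    numbers become numeric fields, lists and booleans become tag fields,
    and everything else becomes a text field."""
    schema = {"text": [], "numeric": [], "tag": []}
    for name, value in metadata.items():
        # bool must be checked before int: isinstance(True, int) is True
        if isinstance(value, (bool, list)):
            schema["tag"].append({"name": name})
        elif isinstance(value, (int, float)):
            schema["numeric"].append({"name": name})
        else:
            schema["text"].append({"name": name})
    return schema
```

If a user-supplied `index_schema` disagrees with this inferred shape, the code only warns rather than failing, since a deliberate override is a supported use case.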
        cls: Type[Redis],
        texts: List[str],
        embedding: Embeddings,
        metadatas: Optional[List[dict]] = None,
        index_name: Optional[str] = None,
        index_schema: Optional[Union[Dict[str, str], str, os.PathLike]] = None,
        vector_schema: Optional[Dict[str, Union[str, int]]] = None,
        **kwargs: Any,
    ) -> Redis:
        """Create a Redis vectorstore from a list of texts.

        This is a user-friendly interface that:
            1. Embeds documents.
            2. Creates a new Redis index if it doesn't already exist.
            3. Adds the documents to the newly created Redis index.

        This method will generate a schema based on the metadata passed in
        if the `index_schema` is not defined. If the `index_schema` is defined,
        it will compare against the generated schema and warn if there are
        differences. If you are purposefully defining the schema for the
        metadata, then you can ignore that warning.

        To examine the schema options, initialize an instance of this class
        and print out the schema using the `Redis.schema` property. This
        will include the content and content_vector fields which are
        always present in the langchain schema.

        Example:
            .. code-block:: python

                from langchain.vectorstores import Redis
                from langchain.embeddings import OpenAIEmbeddings

                embeddings = OpenAIEmbeddings()
                redisearch = Redis.from_texts(
                    texts,
                    embeddings,
                    redis_url="redis://username:password@localhost:6379"
                )

        Args:
            texts (List[str]): List of texts to add to the vectorstore.
            embedding (Embeddings): Embedding model class (i.e. OpenAIEmbeddings)
                for embedding queries.
            metadatas (Optional[List[dict]], optional): Optional list of
                metadata dicts to add to the vectorstore. Defaults to None.
            index_name (Optional[str], optional): Optional name of the index
                to create or add to. Defaults to None.
            index_schema (Optional[Union[Dict[str, str], str, os.PathLike]],
                optional): Optional fields to index within the metadata.
                Overrides generated schema. Defaults to None.
            vector_schema (Optional[Dict[str, Union[str, int]]], optional):
                Optional vector schema to use. Defaults to None.
            **kwargs (Any): Additional keyword arguments to pass to the
                Redis client.

        Returns:
            Redis: Redis VectorStore instance.

        Raises:
            ValueError: If the number of metadatas does not match the number
                of texts.
            ImportError: If the redis python package is not installed.
        """
        instance, _ = cls.from_texts_return_keys(
            texts,
            embedding,
            metadatas=metadatas,
            index_name=index_name,
            index_schema=index_schema,
            vector_schema=vector_schema,
            **kwargs,
        )
        return instance

    @classmethod
    def from_existing_index(
        cls,
        embedding: Embeddings,
        index_name: str,
        schema: Union[Dict[str, str], str, os.PathLike],
        **kwargs: Any,
    ) -> Redis:
        """Connect to an existing Redis index.

        Example:
            .. code-block:: python

                from langchain.vectorstores import Redis
                from langchain.embeddings import OpenAIEmbeddings

                embeddings = OpenAIEmbeddings()
                redisearch = Redis.from_existing_index(
                    embeddings,
                    index_name="my-index",
                    redis_url="redis://username:password@localhost:6379"
                )

        Args:
            embedding (Embeddings): Embedding model class (i.e. OpenAIEmbeddings)
                for embedding queries.
            index_name (str): Name of the index to connect to.
            schema (Union[Dict[str, str], str, os.PathLike]): Schema of the
                index and the vector schema. Can be a dict, or a path to a
                yaml file.
            **kwargs (Any): Additional keyword arguments to pass to the
                Redis client.
        Returns:
            Redis: Redis VectorStore instance.

        Raises:
            ValueError: If the index does not exist.
            ImportError: If the redis python package is not installed.
        """
        redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
        try:
            if "redis_url" in kwargs:
                kwargs.pop("redis_url")
            client = get_client(redis_url=redis_url, **kwargs)
            check_redis_module_exist(client, REDIS_REQUIRED_MODULES)
            assert check_index_exists(
                client, index_name
            ), f"Index {index_name} does not exist"
        except Exception as e:
            raise ValueError(f"Redis failed to connect: {e}")

        return cls(
            redis_url,
            index_name,
            embedding,
            index_schema=schema,
            **kwargs,
        )

    @property
    def schema(self) -> Dict[str, List[Any]]:
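When `schema` is passed as a dict rather than a yaml path, it needs to describe both the metadata fields and the vector field. A hypothetical example of the kind of shape involved (the exact keys below are illustrative, not a normative langchain spec — `write_schema` on an existing instance shows the authoritative layout):

```python
# Illustrative schema dict for reconnecting to an existing index.
# The vector "dims" value must match the embedding model's dimension.
index_schema = {
    "text": [{"name": "content"}],
    "vector": [
        {
            "name": "content_vector",
            "algorithm": "FLAT",
            "dims": 1536,
            "distance_metric": "COSINE",
            "datatype": "FLOAT32",
        }
    ],
}
```

Persisting this dict with `write_schema` after the first `from_texts` call, then loading it for `from_existing_index`, avoids the schema drifting out of sync with the data already indexed.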
"""Return the schema of the index.""" return self._schema.as_dict() def write_schema(self, path: Union[str, os.PathLike]) -> None: """Write the schema to a yaml file.""" with open(path, "w+") as f: yaml.dump(self.schema, f) @staticmethod def delete( ids: Optional[List[str]] = None, **kwargs: Any, ) -> bool: """ Delete a Redis entry. Args: ids: List of ids (keys in redis) to delete. redis_url: Redis connection url. This should be passed in the kwargs or set as an environment variable: REDIS_URL. Returns: bool: Whether or not the deletions were successful. Raises: ValueError: If the redis python package is not installed. ValueError: If the ids (keys in redis) are not provided """ redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL") if ids is None: raise ValueError("'ids' (keys)() were not provided.") try: import redis
        except ImportError:
            raise ValueError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )
        try:
            if "redis_url" in kwargs:
                kwargs.pop("redis_url")
            client = get_client(redis_url=redis_url, **kwargs)
        except ValueError as e:
            raise ValueError(f"Redis connection error: {e}")
        try:
            client.delete(*ids)
            logger.info("Entries deleted")
            return True
        except Exception:
            return False

    @staticmethod
    def drop_index(
        index_name: str,
        delete_documents: bool,
        **kwargs: Any,
    ) -> bool:
        """
        Drop a Redis search index.

        Args:
            index_name (str): Name of the index to drop.
            delete_documents (bool): Whether to drop the associated documents.

        Returns:
            bool: Whether or not the drop was successful.
        """
        redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
        try:
            import redis
        except ImportError:
            raise ValueError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )
        try:
            if "redis_url" in kwargs:
                kwargs.pop("redis_url")
            client = get_client(redis_url=redis_url, **kwargs)
        except ValueError as e:
            raise ValueError(f"Redis connection error: {e}")
        try:
            client.ft(index_name).dropindex(delete_documents)
            logger.info("Drop index")
            return True
        except Exception:
            return False

    def add_texts(
        self,
        texts: Iterable[str],
        metadatas: Optional[List[dict]] = None,
        embeddings: Optional[List[List[float]]] = None,
        batch_size: int = 1000,
        clean_metadata: bool = True,
        **kwargs: Any,
    ) -> List[str]:
        """Add more texts to the vectorstore.

        Args:
            texts (Iterable[str]): Iterable of strings/text to add to the
                vectorstore.
            metadatas (Optional[List[dict]], optional): Optional list of
                metadatas. Defaults to None.
            embeddings (Optional[List[List[float]]], optional): Optional
                pre-generated embeddings. Defaults to None.
            keys (List[str]) or ids (List[str]): Identifiers of entries.
                Defaults to None.
            batch_size (int, optional): Batch size to use for writes.
                Defaults to 1000.

        Returns:
            List[str]: List of ids added to the vectorstore.
        """
        ids = []
        prefix = _redis_prefix(self.index_name)

        keys_or_ids = kwargs.get("keys", kwargs.get("ids"))

        if metadatas:
            if isinstance(metadatas, list) and len(metadatas) != len(texts):
                raise ValueError("Number of metadatas must match number of texts")
            if not (isinstance(metadatas, list) and isinstance(metadatas[0], dict)):
                raise ValueError("Metadatas must be a list of dicts")

        pipeline = self.client.pipeline(transaction=False)
        for i, text in enumerate(texts):
            key = keys_or_ids[i] if keys_or_ids else _redis_key(prefix)
            metadata = metadatas[i] if metadatas else {}
            metadata = _prepare_metadata(metadata) if clean_metadata else metadata
            embedding = (
                embeddings[i] if embeddings else self._embeddings.embed_query(text)
            )
            pipeline.hset(
                key,
                mapping={
                    self._schema.content_key: text,
                    self._schema.content_vector_key: _array_to_buffer(
                        embedding, self._schema.vector_dtype
                    ),
                    **metadata,
                },
            )
            ids.append(key)

            # Flush a full batch of writes
            if i % batch_size == 0:
                pipeline.execute()

        # Flush the final, possibly partial batch
        pipeline.execute()
        return ids

    def as_retriever(self, **kwargs: Any) -> RedisVectorStoreRetriever:
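The batching pattern above buffers `hset` calls on a Redis pipeline and flushes every `batch_size` writes, with a final flush for the remainder. A self-contained sketch of that control flow, using a stand-in pipeline class so it runs without a server (`FakePipeline` and `write_in_batches` are hypothetical names for illustration):

```python
class FakePipeline:
    """Stand-in for a Redis pipeline: counts buffered writes and
    records how many were flushed on each execute() call."""

    def __init__(self):
        self.buffered = 0
        self.executed_batches = []

    def hset(self, key, mapping):
        self.buffered += 1

    def execute(self):
        self.executed_batches.append(self.buffered)
        self.buffered = 0


def write_in_batches(items, pipeline, batch_size=1000):
    """Buffer one hset per item, flushing every batch_size writes."""
    for i, item in enumerate(items):
        pipeline.hset(f"doc:{i}", mapping={"content": item})
        if i % batch_size == 0:
            pipeline.execute()
    pipeline.execute()  # flush the final partial batch
```

One quirk worth noting: because the check is `i % batch_size == 0`, the very first item (`i == 0`) triggers an immediate flush, so the first "batch" contains a single write; subsequent flushes carry full batches.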
        tags = kwargs.pop("tags", None) or []
        tags.extend(self._get_retriever_tags())
        return RedisVectorStoreRetriever(vectorstore=self, **kwargs, tags=tags)

    @deprecated("0.0.272", alternative="similarity_search(distance_threshold=0.1)")
    def similarity_search_limit_score(
        self, query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any
    ) -> List[Document]:
        """
        Returns the most similar indexed documents to the query text within
        the score_threshold range.

        Deprecated: Use similarity_search with distance_threshold instead.

        Args:
            query (str): The query text for which to find similar documents.
            k (int): The number of documents to return. Default is 4.
            score_threshold (float): The minimum matching *distance* required
                for a document to be considered a match. Defaults to 0.2.

        Returns:
            List[Document]: A list of documents that are most similar to the
                query text, including the match score for each document.

        Note:
            If there are no documents that satisfy the score_threshold value,
            an empty list is returned.
        """
        return self.similarity_search(
            query, k=k, distance_threshold=score_threshold, **kwargs
        )

    def similarity_search_with_score(
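The deprecated `score_threshold` parameter above is just forwarded as `distance_threshold`, i.e. results are kept only when their distance is within the threshold. A minimal sketch of that filtering semantics over pre-scored results (`filter_by_distance` is a hypothetical helper, not a langchain function):

```python
def filter_by_distance(scored_docs, distance_threshold):
    """Keep only (doc, distance) pairs whose distance is within the
    threshold; smaller distance means a closer match."""
    return [doc for doc, distance in scored_docs if distance <= distance_threshold]
```

This also explains the note in the docstring: if every candidate's distance exceeds the threshold, the filtered list is simply empty rather than an error.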
```python
        self,
        query: str,
        k: int = 4,
        filter: Optional[RedisFilterExpression] = None,
        return_metadata: bool = True,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Run similarity search with **vector distance**.

        The "scores" returned from this function are the raw vector distances
        from the query vector. For similarity scores, use
        ``similarity_search_with_relevance_scores``.

        Args:
            query (str): The query text for which to find similar documents.
            k (int): The number of documents to return. Default is 4.
            filter (RedisFilterExpression, optional): Optional metadata filter.
                Defaults to None.
            return_metadata (bool, optional): Whether to return metadata.
                Defaults to True.

        Returns:
            List[Tuple[Document, float]]: A list of documents that are
                most similar to the query with the distance for each document.
        """
        try:
```
```python
            import redis
        except ImportError as e:
            raise ImportError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            ) from e

        if "score_threshold" in kwargs:
            logger.warning(
                "score_threshold is deprecated. Use distance_threshold instead."
                + "score_threshold should only be used in "
                + "similarity_search_with_relevance_scores."
                + "score_threshold will be removed in a future release.",
            )

        query_embedding = self._embeddings.embed_query(query)

        redis_query, params_dict = self._prepare_query(
            query_embedding,
            k=k,
            filter=filter,
            with_metadata=return_metadata,
            with_distance=True,
            **kwargs,
        )

        try:
            results = self.client.ft(self.index_name).search(redis_query, params_dict)
        except redis.exceptions.ResponseError as e:
            if str(e).split(" ")[0] == "Syntax":
                raise ValueError(
```
```python
                    "Query failed with syntax error. "
                    + "This is likely due to malformation of "
                    + "filter, vector, or query argument"
                ) from e
            raise e

        docs_with_scores: List[Tuple[Document, float]] = []
        for result in results.docs:
            metadata = {}
            if return_metadata:
                metadata = {"id": result.id}
                metadata.update(self._collect_metadata(result))

            doc = Document(page_content=result.content, metadata=metadata)
            distance = self._calculate_fp_distance(result.distance)
            docs_with_scores.append((doc, distance))

        return docs_with_scores

    def similarity_search(
        self,
        query: str,
        k: int = 4,
        filter: Optional[RedisFilterExpression] = None,
        return_metadata: bool = True,
        distance_threshold: Optional[float] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Run similarity search

        Args:
            query (str): The query text for which to find similar documents.
            k (int): The number of documents to return. Default is 4.
            filter (RedisFilterExpression, optional): Optional metadata filter.
```
```python
                Defaults to None.
            return_metadata (bool, optional): Whether to return metadata.
                Defaults to True.
            distance_threshold (Optional[float], optional): Maximum vector distance
                between selected documents and the query vector. Defaults to None.

        Returns:
            List[Document]: A list of documents that are most similar to the
                query text.
        """
        query_embedding = self._embeddings.embed_query(query)
        return self.similarity_search_by_vector(
            query_embedding,
            k=k,
            filter=filter,
            return_metadata=return_metadata,
            distance_threshold=distance_threshold,
            **kwargs,
        )

    def similarity_search_by_vector(
        self,
        embedding: List[float],
        k: int = 4,
        filter: Optional[RedisFilterExpression] = None,
        return_metadata: bool = True,
        distance_threshold: Optional[float] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Run similarity search between a query vector and the indexed vectors.

        Args:
            embedding (List[float]): The query vector for which to find similar
```
```python
                documents.
            k (int): The number of documents to return. Default is 4.
            filter (RedisFilterExpression, optional): Optional metadata filter.
                Defaults to None.
            return_metadata (bool, optional): Whether to return metadata.
                Defaults to True.
            distance_threshold (Optional[float], optional): Maximum vector distance
                between selected documents and the query vector. Defaults to None.

        Returns:
            List[Document]: A list of documents that are most similar to the
                query text.
        """
        try:
            import redis
        except ImportError as e:
            raise ImportError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            ) from e

        if "score_threshold" in kwargs:
            logger.warning(
                "score_threshold is deprecated. Use distance_threshold instead."
                + "score_threshold should only be used in "
                + "similarity_search_with_relevance_scores."
                + "score_threshold will be removed in a future release.",
            )

        redis_query, params_dict = self._prepare_query(
            embedding,
            k=k,
            filter=filter,
```
```python
            distance_threshold=distance_threshold,
            with_metadata=return_metadata,
            with_distance=False,
        )

        try:
            results = self.client.ft(self.index_name).search(redis_query, params_dict)
        except redis.exceptions.ResponseError as e:
            if str(e).split(" ")[0] == "Syntax":
                raise ValueError(
                    "Query failed with syntax error. "
                    + "This is likely due to malformation of "
                    + "filter, vector, or query argument"
                ) from e
            raise e

        docs = []
        for result in results.docs:
            metadata = {}
            if return_metadata:
                metadata = {"id": result.id}
                metadata.update(self._collect_metadata(result))

            content_key = self._schema.content_key
            docs.append(
                Document(page_content=getattr(result, content_key), metadata=metadata)
            )
        return docs

    def max_marginal_relevance_search(
```
```python
        self,
        query: str,
        k: int = 4,
        fetch_k: int = 20,
        lambda_mult: float = 0.5,
        filter: Optional[RedisFilterExpression] = None,
        return_metadata: bool = True,
        distance_threshold: Optional[float] = None,
        **kwargs: Any,
    ) -> List[Document]:
        """Return docs selected using the maximal marginal relevance.

        Maximal marginal relevance optimizes for similarity to query AND
        diversity among selected documents.

        Args:
            query (str): Text to look up documents similar to.
            k (int): Number of Documents to return. Defaults to 4.
            fetch_k (int): Number of Documents to fetch to pass to MMR algorithm.
            lambda_mult (float): Number between 0 and 1 that determines the degree
                of diversity among the results with 0 corresponding to maximum
                diversity and 1 to minimum diversity. Defaults to 0.5.
            filter (RedisFilterExpression, optional): Optional metadata filter.
                Defaults to None.
            return_metadata (bool, optional): Whether to return metadata.
                Defaults to True.
            distance_threshold (Optional[float], optional): Maximum vector distance
                between selected documents and the query vector. Defaults to None.

        Returns:
            List[Document]: A list of Documents selected by maximal marginal
                relevance.
        """
```
```python
        query_embedding = self._embeddings.embed_query(query)

        prefetch_docs = self.similarity_search_by_vector(
            query_embedding,
            k=fetch_k,
            filter=filter,
            return_metadata=return_metadata,
            distance_threshold=distance_threshold,
            **kwargs,
        )
        prefetch_ids = [doc.metadata["id"] for doc in prefetch_docs]

        prefetch_embeddings = [
            _buffer_to_array(
                cast(
                    bytes,
                    self.client.hget(prefetch_id, self._schema.content_vector_key),
                ),
                dtype=self._schema.vector_dtype,
            )
            for prefetch_id in prefetch_ids
        ]

        selected_indices = maximal_marginal_relevance(
            np.array(query_embedding), prefetch_embeddings, lambda_mult=lambda_mult, k=k
        )
        selected_docs = [prefetch_docs[i] for i in selected_indices]

        return selected_docs

    def _collect_metadata(self, result: "Document") -> Dict[str, Any]:
```
```python
        """Collect metadata from Redis.

        Method ensures that there isn't a mismatch between the metadata
        and the index schema passed to this class by the user or generated
        by this class.

        Args:
            result (Document): redis.commands.search.Document object returned
                from Redis.

        Returns:
            Dict[str, Any]: Collected metadata.
        """
        meta = {}
        for key in self._schema.metadata_keys:
            try:
                meta[key] = getattr(result, key)
            except AttributeError:
                logger.warning(
                    f"Metadata key {key} not found in metadata. "
                    + "Setting to None. \n"
                    + "Metadata fields defined for this instance: "
                    + f"{self._schema.metadata_keys}"
                )
                meta[key] = None
        return meta

    def _prepare_query(
```
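`_collect_metadata` above tolerates drift between the declared schema and what a search result actually carries by falling back to `None` for missing keys. The same pattern in isolation (simplified: this sketch drops the warning log, and the result object is a stand-in, not a `redis.commands.search.Document`):

```python
from types import SimpleNamespace
from typing import Any, Dict, List


def collect_metadata(result: Any, metadata_keys: List[str]) -> Dict[str, Any]:
    """Pull declared metadata keys off a result, defaulting missing ones to None."""
    meta: Dict[str, Any] = {}
    for key in metadata_keys:
        meta[key] = getattr(result, key, None)  # missing attribute -> None
    return meta


result = SimpleNamespace(author="jane", year=2023)
assert collect_metadata(result, ["author", "year", "source"]) == {
    "author": "jane",
    "year": 2023,
    "source": None,
}
```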
```python
        self,
        query_embedding: List[float],
        k: int = 4,
        filter: Optional[RedisFilterExpression] = None,
        distance_threshold: Optional[float] = None,
        with_metadata: bool = True,
        with_distance: bool = False,
    ) -> Tuple["Query", Dict[str, Any]]:
        params_dict: Dict[str, Union[str, bytes, float]] = {
            "vector": _array_to_buffer(query_embedding, self._schema.vector_dtype),
        }

        return_fields = [self._schema.content_key]
        if with_distance:
            return_fields.append("distance")
        if with_metadata:
            return_fields.extend(self._schema.metadata_keys)

        if distance_threshold:
            params_dict["distance_threshold"] = distance_threshold
            return (
                self._prepare_range_query(
                    k, filter=filter, return_fields=return_fields
                ),
                params_dict,
            )
        return (
```
```python
            self._prepare_vector_query(k, filter=filter, return_fields=return_fields),
            params_dict,
        )

    def _prepare_range_query(
        self,
        k: int,
        filter: Optional[RedisFilterExpression] = None,
        return_fields: Optional[List[str]] = None,
    ) -> "Query":
        try:
            from redis.commands.search.query import Query
        except ImportError as e:
            raise ImportError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            ) from e

        return_fields = return_fields or []
        vector_key = self._schema.content_vector_key
        base_query = f"@{vector_key}:[VECTOR_RANGE $distance_threshold $vector]"

        if filter:
            base_query = "(" + base_query + " " + str(filter) + ")"

        query_string = base_query + "=>{$yield_distance_as: distance}"
        return (
            Query(query_string)
            .return_fields(*return_fields)
            .sort_by("distance")
            .paging(0, k)
            .dialect(2)
        )

    def _prepare_vector_query(
```
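`_prepare_range_query` above assembles a RediSearch `VECTOR_RANGE` query string before handing it to `Query`. The string assembly on its own, following the same steps (the `content_vector` field name is illustrative, and a filter is passed as a pre-rendered string rather than a `RedisFilterExpression`):

```python
from typing import Optional


def build_range_query(vector_key: str, filter_str: Optional[str] = None) -> str:
    """Assemble a RediSearch VECTOR_RANGE query string as base.py does."""
    base_query = f"@{vector_key}:[VECTOR_RANGE $distance_threshold $vector]"
    if filter_str:
        # A metadata filter wraps the range clause in parentheses.
        base_query = "(" + base_query + " " + filter_str + ")"
    # Alias the computed distance so it can be returned and sorted on.
    return base_query + "=>{$yield_distance_as: distance}"


q = build_range_query("content_vector")
assert q == (
    "@content_vector:[VECTOR_RANGE $distance_threshold $vector]"
    "=>{$yield_distance_as: distance}"
)
```

The `$distance_threshold` and `$vector` placeholders are bound at search time via the `params_dict` built in `_prepare_query`.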
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11,197
Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"]
### System Info Following the steps of indexing from [ https://python.langchain.com/docs/modules/data_connection/indexing ](url) you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index". You'll get this exception while using redis as retriever: ![Screenshot from 2023-09-28 16-57-03](https://github.com/langchain-ai/langchain/assets/81446007/afae536e-7888-4183-93d0-bfa65a8845a2) ### Who can help? _No response_ ### Information - [x] The official example notebooks/scripts - [ ] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction ![Screenshot from 2023-09-28 16-13-18](https://github.com/langchain-ai/langchain/assets/81446007/281206ed-b7c6-4b18-a3cc-25874fec7a06) The error is here: ![Screenshot from 2023-09-28 16-32-00](https://github.com/langchain-ai/langchain/assets/81446007/f6bf52fb-d0ef-4f38-b247-475522bdfece) If you look for the index you'll get (empty list or set). This line makes impossible to save in the wanted format, and there's another thing: The index is not created for some reason. I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps. ### Expected behavior Expected behavior inside Redis: "docs:indexname_:12ss2sadd"
https://github.com/langchain-ai/langchain/issues/11197
https://github.com/langchain-ai/langchain/pull/11257
079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a
d5c2ce7c2e1179907400f2c96fc6309a54cbce6a
"2023-09-28T19:57:36Z"
python
"2023-10-24T17:51:25Z"
libs/langchain/langchain/vectorstores/redis/base.py
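The "Expected behavior" above is about the document-key prefix scheme. The helpers that derive keys from the index name are not included in the code chunks of this record, so the following is a hypothetical sketch: the `doc:` prefix and the random hex suffix are assumptions inferred from the observed key format, not confirmed by this excerpt.

```python
# Hypothetical sketch of the key scheme; the real helpers (_redis_prefix,
# _redis_key) live elsewhere in base.py and may differ in detail.
import uuid


def redis_prefix(index_name: str) -> str:
    # All documents of one index share this prefix; _create_index passes the
    # same prefix to IndexDefinition(prefix=[prefix], ...).
    return f"doc:{index_name}"


def redis_key(prefix: str) -> str:
    # Individual document keys append a random hex id to the shared prefix.
    return f"{prefix}:{uuid.uuid4().hex}"


key = redis_key(redis_prefix("my_docs"))
print(key)  # e.g. doc:my_docs:<32 hex chars>
```

If documents are written under a prefix that differs from the one the index definition was created with, searches silently return nothing; if the index was never created at all, FT.SEARCH raises the "no such index" error reported here.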
        self,
        k: int,
        filter: Optional[RedisFilterExpression] = None,
        return_fields: Optional[List[str]] = None,
    ) -> "Query":
        """Prepare query for vector search.

        Args:
            k: Number of results to return.
            filter: Optional metadata filter.

        Returns:
            query: Query object.
        """
        try:
            from redis.commands.search.query import Query
        except ImportError as e:
            raise ImportError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            ) from e
        return_fields = return_fields or []
        query_prefix = "*"
        if filter:
            query_prefix = f"{str(filter)}"
        vector_key = self._schema.content_vector_key
        base_query = f"({query_prefix})=>[KNN {k} @{vector_key} $vector AS distance]"
        query = (
            Query(base_query)
            .return_fields(*return_fields)
            .sort_by("distance")
            .paging(0, k)
            .dialect(2)
        )
        return query

    def _get_schema_with_defaults(
        self,
        index_schema: Optional[Union[Dict[str, str], str, os.PathLike]] = None,
        vector_schema: Optional[Dict[str, Union[str, int]]] = None,
    ) -> "RedisModel":
        from langchain.vectorstores.redis.schema import RedisModel, read_schema

        schema = RedisModel()
        if index_schema:
            schema_values = read_schema(index_schema)
            schema = RedisModel(**schema_values)

        schema.add_content_field()

        try:
            schema.content_vector
            if vector_schema:
                logger.warning(
                    "`vector_schema` is ignored since content_vector is "
                    + "overridden in `index_schema`."
                )
        except ValueError:
            vector_field = self.DEFAULT_VECTOR_SCHEMA.copy()
            if vector_schema:
                vector_field.update(vector_schema)
            schema.add_vector_field(vector_field)
        return schema

    def _create_index(self, dim: int = 1536) -> None:
        try:
            from redis.commands.search.indexDefinition import (
                IndexDefinition,
                IndexType,
            )
        except ImportError:
            raise ImportError(
                "Could not import redis python package. "
                "Please install it with `pip install redis`."
            )

        self._schema.content_vector.dims = dim

        if not check_index_exists(self.client, self.index_name):
            prefix = _redis_prefix(self.index_name)
            self.client.ft(self.index_name).create_index(
                fields=self._schema.get_fields(),
                definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH),
            )

    def _calculate_fp_distance(self, distance: str) -> float:
"""Calculate the distance based on the vector datatype Two datatypes supported: - FLOAT32 - FLOAT64 if it's FLOAT32, we need to round the distance to 4 decimal places otherwise, round to 7 decimal places. """ if self._schema.content_vector.datatype == "FLOAT32": return round(float(distance), 4) return round(float(distance), 7) def _check_deprecated_kwargs(self, kwargs: Mapping[str, Any]) -> None:
"""Check for deprecated kwargs.""" deprecated_kwargs = { "redis_host": "redis_url", "redis_port": "redis_url", "redis_password": "redis_url", "content_key": "index_schema", "vector_key": "vector_schema", "distance_metric": "vector_schema", } for key, value in kwargs.items(): if key in deprecated_kwargs: raise ValueError( f"Keyword argument '{key}' is deprecated. " f"Please use '{deprecated_kwargs[key]}' instead." ) def _select_relevance_score_fn(self) -> Callable[[float], float]: if self.relevance_score_fn: return self.relevance_score_fn metric_map = { "COSINE": self._cosine_relevance_score_fn, "IP": self._max_inner_product_relevance_score_fn, "L2": self._euclidean_relevance_score_fn, } try: return metric_map[self._schema.content_vector.distance_metric] except KeyError: return _default_relevance_score def _generate_field_schema(data: Dict[str, Any]) -> Dict[str, Any]:
""" Generate a schema for the search index in Redis based on the input metadata. Given a dictionary of metadata, this function categorizes each metadata field into one of the three categories: - text: The field contains textual data. - numeric: The field contains numeric data (either integer or float). - tag: The field contains list of tags (strings). Args data (Dict[str, Any]): A dictionary where keys are metadata field names and values are the metadata values. Returns: Dict[str, Any]: A dictionary with three keys "text", "numeric", and "tag". Each key maps to a list of fields that belong to that category. Raises: ValueError: If a metadata field cannot be categorized into any of the three known types. """ result: Dict[str, Any] = { "text": [], "numeric": [], "tag": [], } for key, value in data.items(): try:
            int(value)
            result["numeric"].append({"name": key})
            continue
        except (ValueError, TypeError):
            pass

        if value is None:
            continue

        if isinstance(value, (list, tuple)):
            if not value or isinstance(value[0], str):
                result["tag"].append({"name": key})
            else:
                name = type(value[0]).__name__
                raise ValueError(
                    f"List/tuple values should contain strings: '{key}': {name}"
                )
            continue

        if isinstance(value, str):
            result["text"].append({"name": key})
            continue

        name = type(value).__name__
        raise ValueError(
            "Could not generate Redis index field type mapping "
            + f"for metadata: '{key}': {name}"
        )

    return result


def _prepare_metadata(metadata: Dict[str, Any]) -> Dict[str, Any]:
""" Prepare metadata for indexing in Redis by sanitizing its values. - String, integer, and float values remain unchanged. - None or empty values are replaced with empty strings. - Lists/tuples of strings are joined into a single string with a comma separator. Args: metadata (Dict[str, Any]): A dictionary where keys are metadata field names and values are the metadata values. Returns: Dict[str, Any]: A sanitized dictionary ready for indexing in Redis. Raises: ValueError: If any metadata value is not one of the known types (string, int, float, or list of strings). """ def raise_error(key: str, value: Any) -> None:
        raise ValueError(
            f"Metadata value for key '{key}' must be a string, int, "
            + f"float, or list of strings. Got {type(value).__name__}"
        )

    clean_meta: Dict[str, Union[str, float, int]] = {}
    for key, value in metadata.items():
        if not value:
            clean_meta[key] = ""
            continue

        if isinstance(value, (str, int, float)):
            clean_meta[key] = value
        elif isinstance(value, (list, tuple)):
            if not value or isinstance(value[0], str):
                clean_meta[key] = REDIS_TAG_SEPARATOR.join(value)
            else:
                raise_error(key, value)
        else:
            raise_error(key, value)
    return clean_meta


class RedisVectorStoreRetriever(VectorStoreRetriever):
"""Retriever for Redis VectorStore.""" vectorstore: Redis """Redis VectorStore.""" search_type: str = "similarity" """Type of search to perform. Can be either 'similarity', 'similarity_distance_threshold', 'similarity_score_threshold' """ search_kwargs: Dict[str, Any] = { "k": 4, "score_threshold": 0.9, "distance_threshold": None, } """Default search kwargs.""" allowed_search_types = [ "similarity", "similarity_distance_threshold", "similarity_score_threshold", "mmr", ] """Allowed search types.""" class Config:
"""Configuration for this pydantic object.""" arbitrary_types_allowed = True def _get_relevant_documents( self, query: str, *, run_manager: CallbackManagerForRetrieverRun ) -> List[Document]: if self.search_type == "similarity": docs = self.vectorstore.similarity_search(query, **self.search_kwargs) elif self.search_type == "similarity_distance_threshold": if self.search_kwargs["distance_threshold"] is None: raise ValueError( "distance_threshold must be provided for " + "similarity_distance_threshold retriever" ) docs = self.vectorstore.similarity_search(query, **self.search_kwargs) elif self.search_type == "similarity_score_threshold": docs_and_similarities = ( self.vectorstore.similarity_search_with_relevance_scores( query, **self.search_kwargs ) ) docs = [doc for doc, _ in docs_and_similarities] elif self.search_type == "mmr": docs = self.vectorstore.max_marginal_relevance_search( query, **self.search_kwargs ) else: raise ValueError(f"search_type of {self.search_type} not allowed.") return docs async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
    ) -> List[Document]:
        raise NotImplementedError("RedisVectorStoreRetriever does not support async")

    def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
        """Add documents to vectorstore."""
        return self.vectorstore.add_documents(documents, **kwargs)

    async def aadd_documents(
        self, documents: List[Document], **kwargs: Any
    ) -> List[str]:
        """Add documents to vectorstore."""
        return await self.vectorstore.aadd_documents(documents, **kwargs)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,907
RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable
### System Info langchain = 0.0.251 Python = 3.10.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create an OWL ontology called `dbpedia_sample.ttl` with the following: ``` turtle @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix dcterms: <http://purl.org/dc/terms/> . @prefix wikidata: <http://www.wikidata.org/entity/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix prov: <http://www.w3.org/ns/prov#> . @prefix : <http://dbpedia.org/ontology/> . :Actor a owl:Class ; rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ; rdfs:label "actor"@en ; rdfs:subClassOf :Artist ; owl:equivalentClass wikidata:Q33999 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> . :AdministrativeRegion a owl:Class ; rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ; rdfs:label "administrative region"@en ; rdfs:subClassOf :Region ; owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> . 
:birthPlace a rdf:Property, owl:ObjectProperty ; rdfs:comment "where the person was born"@en ; rdfs:domain :Animal ; rdfs:label "birth place"@en ; rdfs:range :Place ; rdfs:subPropertyOf dul:hasLocation ; owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> . ``` 2. Run ``` python from langchain.graphs import RdfGraph graph = RdfGraph( source_file="dbpedia_sample.ttl", serialization="ttl", standard="owl" ) print(graph.get_schema) ``` 3. Output ``` In the following, each IRI is followed by the local name and optionally its description in parentheses. The OWL graph supports the following node types: <http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)) The OWL graph supports the following object properties, i.e., relationships between objects: <http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. 
This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born) The OWL graph supports the following data properties, i.e., relationships between objects and literals: ``` ### Expected behavior The issue is that in the SPARQL queries that retrieve the properties, the `rdfs:comment` triple pattern always refers to the variable `?cls`, which obviously comes from copy/paste code. For example, getting the RDFS properties via ``` python rel_query_rdf = prefixes["rdfs"] + ( """SELECT DISTINCT ?rel ?com\n""" """WHERE { \n""" """ ?subj ?rel ?obj . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ``` you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`. The same holds for all other queries regarding properties. The current status leads to a Cartesian product of properties and all `rdfs:comment` values in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part)
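The correction described above can be sketched as follows (an assumption about the intended fix, not the shipped code): the `OPTIONAL` clause must reuse the projected variable `?rel`, so the comment lookup is joined to each property instead of producing a Cartesian product with every `rdfs:comment` in the dataset.

```python
# Corrected version of rel_query_rdf: OPTIONAL binds ?rel, not the
# unbound ?cls left over from the class queries.

PREFIX_RDFS = "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n"

rel_query_rdf_fixed = PREFIX_RDFS + (
    "SELECT DISTINCT ?rel ?com\n"
    "WHERE { \n"
    "  ?subj ?rel ?obj . \n"
    "  OPTIONAL { ?rel rdfs:comment ?com } \n"
    "}"
)
```

With `?rel` in the `OPTIONAL` pattern, each result row carries at most the comments attached to that property's IRI.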
https://github.com/langchain-ai/langchain/issues/8907
https://github.com/langchain-ai/langchain/pull/9136
d9f1bcf366b5a66021d246d8e9c56e76fe60ead1
cce132d1460b4f52541cb4a6f13219fb8fe4f907
"2023-08-08T10:57:54Z"
python
"2023-10-25T20:36:57Z"
libs/langchain/langchain/graphs/rdf_graph.py
from __future__ import annotations

from typing import (
    TYPE_CHECKING,
    List,
    Optional,
)

if TYPE_CHECKING:
    import rdflib

prefixes = {
    "owl": """PREFIX owl: <http://www.w3.org/2002/07/owl#>\n""",
    "rdf": """PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>\n""",
    "rdfs": """PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n""",
    "xsd": """PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>\n""",
}

cls_query_rdf = prefixes["rdfs"] + (
    """SELECT DISTINCT ?cls ?com\n"""
    """WHERE { \n"""
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,907
RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable
### System Info langchain = 0.0.251 Python = 3.10.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create an OWL ontology called `dbpedia_sample.ttl` with the following: ``` turtle @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix dcterms: <http://purl.org/dc/terms/> . @prefix wikidata: <http://www.wikidata.org/entity/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix prov: <http://www.w3.org/ns/prov#> . @prefix : <http://dbpedia.org/ontology/> . :Actor a owl:Class ; rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ; rdfs:label "actor"@en ; rdfs:subClassOf :Artist ; owl:equivalentClass wikidata:Q33999 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> . :AdministrativeRegion a owl:Class ; rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ; rdfs:label "administrative region"@en ; rdfs:subClassOf :Region ; owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> . 
:birthPlace a rdf:Property, owl:ObjectProperty ; rdfs:comment "where the person was born"@en ; rdfs:domain :Animal ; rdfs:label "birth place"@en ; rdfs:range :Place ; rdfs:subPropertyOf dul:hasLocation ; owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> . ``` 2. Run ``` python from langchain.graphs import RdfGraph graph = RdfGraph( source_file="dbpedia_sample.ttl", serialization="ttl", standard="owl" ) print(graph.get_schema) ``` 3. Output ``` In the following, each IRI is followed by the local name and optionally its description in parentheses. The OWL graph supports the following node types: <http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)) The OWL graph supports the following object properties, i.e., relationships between objects: <http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. 
This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born) The OWL graph supports the following data properties, i.e., relationships between objects and literals: ``` ### Expected behavior The issue is that in the SPARQL queries getting the properties the `rdfs:comment` triple pattern always refers to the variable `?cls` which obviously comes from copy/paste code. For example, getting the RDFS properties via ``` python rel_query_rdf = prefixes["rdfs"] + ( """SELECT DISTINCT ?rel ?com\n""" """WHERE { \n""" """ ?subj ?rel ?obj . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ``` you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`. The same holds for all other queries regarding properties. The current status leads to a cartesian product of properties and all `rdfs:comment` vlaues in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part)
https://github.com/langchain-ai/langchain/issues/8907
https://github.com/langchain-ai/langchain/pull/9136
d9f1bcf366b5a66021d246d8e9c56e76fe60ead1
cce132d1460b4f52541cb4a6f13219fb8fe4f907
"2023-08-08T10:57:54Z"
python
"2023-10-25T20:36:57Z"
libs/langchain/langchain/graphs/rdf_graph.py
""" ?instance a ?cls . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) cls_query_rdfs = prefixes["rdfs"] + ( """SELECT DISTINCT ?cls ?com\n""" """WHERE { \n""" """ ?instance a/rdfs:subClassOf* ?cls . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) cls_query_owl = prefixes["rdfs"] + ( """SELECT DISTINCT ?cls ?com\n""" """WHERE { \n""" """ ?instance a/rdfs:subClassOf* ?cls . \n""" """ FILTER (isIRI(?cls)) . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) rel_query_rdf = prefixes["rdfs"] + ( """SELECT DISTINCT ?rel ?com\n""" """WHERE { \n""" """ ?subj ?rel ?obj . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) rel_query_rdfs = ( prefixes["rdf"] + prefixes["rdfs"] + (
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,907
RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable
### System Info langchain = 0.0.251 Python = 3.10.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create an OWL ontology called `dbpedia_sample.ttl` with the following: ``` turtle @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix dcterms: <http://purl.org/dc/terms/> . @prefix wikidata: <http://www.wikidata.org/entity/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix prov: <http://www.w3.org/ns/prov#> . @prefix : <http://dbpedia.org/ontology/> . :Actor a owl:Class ; rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ; rdfs:label "actor"@en ; rdfs:subClassOf :Artist ; owl:equivalentClass wikidata:Q33999 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> . :AdministrativeRegion a owl:Class ; rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ; rdfs:label "administrative region"@en ; rdfs:subClassOf :Region ; owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> . 
:birthPlace a rdf:Property, owl:ObjectProperty ; rdfs:comment "where the person was born"@en ; rdfs:domain :Animal ; rdfs:label "birth place"@en ; rdfs:range :Place ; rdfs:subPropertyOf dul:hasLocation ; owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> . ``` 2. Run ``` python from langchain.graphs import RdfGraph graph = RdfGraph( source_file="dbpedia_sample.ttl", serialization="ttl", standard="owl" ) print(graph.get_schema) ``` 3. Output ``` In the following, each IRI is followed by the local name and optionally its description in parentheses. The OWL graph supports the following node types: <http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)) The OWL graph supports the following object properties, i.e., relationships between objects: <http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. 
This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born) The OWL graph supports the following data properties, i.e., relationships between objects and literals: ``` ### Expected behavior The issue is that in the SPARQL queries getting the properties the `rdfs:comment` triple pattern always refers to the variable `?cls` which obviously comes from copy/paste code. For example, getting the RDFS properties via ``` python rel_query_rdf = prefixes["rdfs"] + ( """SELECT DISTINCT ?rel ?com\n""" """WHERE { \n""" """ ?subj ?rel ?obj . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ``` you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`. The same holds for all other queries regarding properties. The current status leads to a cartesian product of properties and all `rdfs:comment` vlaues in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part)
https://github.com/langchain-ai/langchain/issues/8907
https://github.com/langchain-ai/langchain/pull/9136
d9f1bcf366b5a66021d246d8e9c56e76fe60ead1
cce132d1460b4f52541cb4a6f13219fb8fe4f907
"2023-08-08T10:57:54Z"
python
"2023-10-25T20:36:57Z"
libs/langchain/langchain/graphs/rdf_graph.py
"""SELECT DISTINCT ?rel ?com\n""" """WHERE { \n""" """ ?rel a/rdfs:subPropertyOf* rdf:Property . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ) op_query_owl = ( prefixes["rdfs"] + prefixes["owl"] + ( """SELECT DISTINCT ?op ?com\n""" """WHERE { \n""" """ ?op a/rdfs:subPropertyOf* owl:ObjectProperty . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ) dp_query_owl = ( prefixes["rdfs"] + prefixes["owl"] + ( """SELECT DISTINCT ?dp ?com\n""" """WHERE { \n""" """ ?dp a/rdfs:subPropertyOf* owl:DatatypeProperty . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ) class RdfGraph:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,907
RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable
### System Info langchain = 0.0.251 Python = 3.10.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create an OWL ontology called `dbpedia_sample.ttl` with the following: ``` turtle @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix dcterms: <http://purl.org/dc/terms/> . @prefix wikidata: <http://www.wikidata.org/entity/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix prov: <http://www.w3.org/ns/prov#> . @prefix : <http://dbpedia.org/ontology/> . :Actor a owl:Class ; rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ; rdfs:label "actor"@en ; rdfs:subClassOf :Artist ; owl:equivalentClass wikidata:Q33999 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> . :AdministrativeRegion a owl:Class ; rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ; rdfs:label "administrative region"@en ; rdfs:subClassOf :Region ; owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> . 
:birthPlace a rdf:Property, owl:ObjectProperty ; rdfs:comment "where the person was born"@en ; rdfs:domain :Animal ; rdfs:label "birth place"@en ; rdfs:range :Place ; rdfs:subPropertyOf dul:hasLocation ; owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> . ``` 2. Run ``` python from langchain.graphs import RdfGraph graph = RdfGraph( source_file="dbpedia_sample.ttl", serialization="ttl", standard="owl" ) print(graph.get_schema) ``` 3. Output ``` In the following, each IRI is followed by the local name and optionally its description in parentheses. The OWL graph supports the following node types: <http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)) The OWL graph supports the following object properties, i.e., relationships between objects: <http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. 
This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born) The OWL graph supports the following data properties, i.e., relationships between objects and literals: ``` ### Expected behavior The issue is that in the SPARQL queries getting the properties the `rdfs:comment` triple pattern always refers to the variable `?cls` which obviously comes from copy/paste code. For example, getting the RDFS properties via ``` python rel_query_rdf = prefixes["rdfs"] + ( """SELECT DISTINCT ?rel ?com\n""" """WHERE { \n""" """ ?subj ?rel ?obj . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ``` you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`. The same holds for all other queries regarding properties. The current status leads to a cartesian product of properties and all `rdfs:comment` vlaues in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part)
https://github.com/langchain-ai/langchain/issues/8907
https://github.com/langchain-ai/langchain/pull/9136
d9f1bcf366b5a66021d246d8e9c56e76fe60ead1
cce132d1460b4f52541cb4a6f13219fb8fe4f907
"2023-08-08T10:57:54Z"
python
"2023-10-25T20:36:57Z"
libs/langchain/langchain/graphs/rdf_graph.py
"""RDFlib wrapper for graph operations. Modes: * local: Local file - can be queried and changed * online: Online file - can only be queried, changes can be stored locally * store: Triple store - can be queried and changed if update_endpoint available Together with a source file, the serialization should be specified. *Security note*: Make sure that the database connection uses credentials that are narrowly-scoped to only include necessary permissions. Failure to do so may result in data corruption or loss, since the calling code may attempt commands that would result in deletion, mutation of data if appropriately prompted or reading sensitive data if such data is present in the database. The best way to guard against such negative outcomes is to (as appropriate) limit the permissions granted to the credentials used with this tool. See https://python.langchain.com/docs/security for more information. """ def __init__( self, source_file: Optional[str] = None, serialization: Optional[str] = "ttl", query_endpoint: Optional[str] = None, update_endpoint: Optional[str] = None, standard: Optional[str] = "rdf", local_copy: Optional[str] = None, ) -> None: """ Set up the RDFlib graph :param source_file: either a path for a local file or a URL :param serialization: serialization of the input :param query_endpoint: SPARQL endpoint for queries, read access
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,907
RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable
### System Info langchain = 0.0.251 Python = 3.10.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create an OWL ontology called `dbpedia_sample.ttl` with the following: ``` turtle @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix dcterms: <http://purl.org/dc/terms/> . @prefix wikidata: <http://www.wikidata.org/entity/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix prov: <http://www.w3.org/ns/prov#> . @prefix : <http://dbpedia.org/ontology/> . :Actor a owl:Class ; rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ; rdfs:label "actor"@en ; rdfs:subClassOf :Artist ; owl:equivalentClass wikidata:Q33999 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> . :AdministrativeRegion a owl:Class ; rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ; rdfs:label "administrative region"@en ; rdfs:subClassOf :Region ; owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> . 
:birthPlace a rdf:Property, owl:ObjectProperty ; rdfs:comment "where the person was born"@en ; rdfs:domain :Animal ; rdfs:label "birth place"@en ; rdfs:range :Place ; rdfs:subPropertyOf dul:hasLocation ; owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> . ``` 2. Run ``` python from langchain.graphs import RdfGraph graph = RdfGraph( source_file="dbpedia_sample.ttl", serialization="ttl", standard="owl" ) print(graph.get_schema) ``` 3. Output ``` In the following, each IRI is followed by the local name and optionally its description in parentheses. The OWL graph supports the following node types: <http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)) The OWL graph supports the following object properties, i.e., relationships between objects: <http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. 
This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born) The OWL graph supports the following data properties, i.e., relationships between objects and literals: ``` ### Expected behavior The issue is that in the SPARQL queries getting the properties the `rdfs:comment` triple pattern always refers to the variable `?cls` which obviously comes from copy/paste code. For example, getting the RDFS properties via ``` python rel_query_rdf = prefixes["rdfs"] + ( """SELECT DISTINCT ?rel ?com\n""" """WHERE { \n""" """ ?subj ?rel ?obj . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ``` you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`. The same holds for all other queries regarding properties. The current status leads to a cartesian product of properties and all `rdfs:comment` vlaues in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part)
https://github.com/langchain-ai/langchain/issues/8907
https://github.com/langchain-ai/langchain/pull/9136
d9f1bcf366b5a66021d246d8e9c56e76fe60ead1
cce132d1460b4f52541cb4a6f13219fb8fe4f907
"2023-08-08T10:57:54Z"
python
"2023-10-25T20:36:57Z"
libs/langchain/langchain/graphs/rdf_graph.py
        :param update_endpoint: SPARQL endpoint for UPDATE queries, write access
        :param standard: RDF, RDFS, or OWL
        :param local_copy: new local copy for storing changes
        """
        self.source_file = source_file
        self.serialization = serialization
        self.query_endpoint = query_endpoint
        self.update_endpoint = update_endpoint
        self.standard = standard
        self.local_copy = local_copy

        try:
            import rdflib
            from rdflib.graph import DATASET_DEFAULT_GRAPH_ID as default
            from rdflib.plugins.stores import sparqlstore
        except ImportError:
            raise ValueError(
                "Could not import rdflib python package. "
                "Please install it with `pip install rdflib`."
            )

        if self.standard not in (supported_standards := ("rdf", "rdfs", "owl")):
            raise ValueError(
                f"Invalid standard. Supported standards are: {supported_standards}."
            )

        if (
            not source_file
            and not query_endpoint
            or source_file
            and (query_endpoint or update_endpoint)
        ):
            raise ValueError(
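The final condition above encodes "exactly one source": either a file (local or online) or a SPARQL endpoint, never neither and never both. A pure-function sketch of that truth table (a simplification of `RdfGraph.__init__`, written here for illustration):

```python
# True when the argument combination is ambiguous: no source at all,
# or a file combined with any endpoint.

def is_ambiguous(source_file, query_endpoint, update_endpoint=None):
    return bool(
        not source_file and not query_endpoint
        or source_file and (query_endpoint or update_endpoint)
    )
```

Note the Python precedence: `and` binds tighter than `or`, so the expression reads as `(no file and no endpoint) or (file and some endpoint)`.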
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,907
RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable
### System Info langchain = 0.0.251 Python = 3.10.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create an OWL ontology called `dbpedia_sample.ttl` with the following: ``` turtle @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix dcterms: <http://purl.org/dc/terms/> . @prefix wikidata: <http://www.wikidata.org/entity/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix prov: <http://www.w3.org/ns/prov#> . @prefix : <http://dbpedia.org/ontology/> . :Actor a owl:Class ; rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ; rdfs:label "actor"@en ; rdfs:subClassOf :Artist ; owl:equivalentClass wikidata:Q33999 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> . :AdministrativeRegion a owl:Class ; rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ; rdfs:label "administrative region"@en ; rdfs:subClassOf :Region ; owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> . 
:birthPlace a rdf:Property, owl:ObjectProperty ; rdfs:comment "where the person was born"@en ; rdfs:domain :Animal ; rdfs:label "birth place"@en ; rdfs:range :Place ; rdfs:subPropertyOf dul:hasLocation ; owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> . ``` 2. Run ``` python from langchain.graphs import RdfGraph graph = RdfGraph( source_file="dbpedia_sample.ttl", serialization="ttl", standard="owl" ) print(graph.get_schema) ``` 3. Output ``` In the following, each IRI is followed by the local name and optionally its description in parentheses. The OWL graph supports the following node types: <http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)) The OWL graph supports the following object properties, i.e., relationships between objects: <http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. 
This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born) The OWL graph supports the following data properties, i.e., relationships between objects and literals: ``` ### Expected behavior The issue is that, in the SPARQL queries that retrieve the properties, the `rdfs:comment` triple pattern always refers to the variable `?cls`, which obviously comes from copy/paste code. For example, getting the RDFS properties via ``` python rel_query_rdf = prefixes["rdfs"] + ( """SELECT DISTINCT ?rel ?com\n""" """WHERE { \n""" """ ?subj ?rel ?obj . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ``` you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`. The same holds for all other queries regarding properties. The current status leads to a cartesian product of properties and all `rdfs:comment` values in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part)
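A minimal sketch of the corrected query described above — the `OPTIONAL` clause binds the comment to `?rel` instead of `?cls`. The inlined `prefixes` dict is an assumption here (in `rdf_graph.py` it is defined elsewhere in the module):

```python
# Sketch of the fixed property query: the OPTIONAL clause now attaches
# rdfs:comment to ?rel, so each property picks up only its own comment
# instead of a cartesian product with every comment in the dataset.
prefixes = {"rdfs": "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n"}

rel_query_rdf = prefixes["rdfs"] + (
    """SELECT DISTINCT ?rel ?com\n"""
    """WHERE { \n"""
    """ ?subj ?rel ?obj . \n"""
    """ OPTIONAL { ?rel rdfs:comment ?com } \n"""
    """}"""
)
```

With this change, each result row pairs a property IRI only with comments attached to that property itself.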
https://github.com/langchain-ai/langchain/issues/8907
https://github.com/langchain-ai/langchain/pull/9136
d9f1bcf366b5a66021d246d8e9c56e76fe60ead1
cce132d1460b4f52541cb4a6f13219fb8fe4f907
"2023-08-08T10:57:54Z"
python
"2023-10-25T20:36:57Z"
libs/langchain/langchain/graphs/rdf_graph.py
"Could not unambiguously initialize the graph wrapper. " "Specify either a file (local or online) via the source_file " "or a triple store via the endpoints." ) if source_file: if source_file.startswith("http"): self.mode = "online" else: self.mode = "local" if self.local_copy is None: self.local_copy = self.source_file self.graph = rdflib.Graph() self.graph.parse(source_file, format=self.serialization) if query_endpoint: self.mode = "store" if not update_endpoint: self._store = sparqlstore.SPARQLStore() self._store.open(query_endpoint) else: self._store = sparqlstore.SPARQLUpdateStore() self._store.open((query_endpoint, update_endpoint)) self.graph = rdflib.Graph(self._store, identifier=default) if not len(self.graph): raise AssertionError("The graph is empty.") self.schema = "" self.load_schema() @property def get_schema(self) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,907
RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable
### System Info langchain = 0.0.251 Python = 3.10.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create an OWL ontology called `dbpedia_sample.ttl` with the following: ``` turtle @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix dcterms: <http://purl.org/dc/terms/> . @prefix wikidata: <http://www.wikidata.org/entity/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix prov: <http://www.w3.org/ns/prov#> . @prefix : <http://dbpedia.org/ontology/> . :Actor a owl:Class ; rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ; rdfs:label "actor"@en ; rdfs:subClassOf :Artist ; owl:equivalentClass wikidata:Q33999 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> . :AdministrativeRegion a owl:Class ; rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ; rdfs:label "administrative region"@en ; rdfs:subClassOf :Region ; owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> . 
:birthPlace a rdf:Property, owl:ObjectProperty ; rdfs:comment "where the person was born"@en ; rdfs:domain :Animal ; rdfs:label "birth place"@en ; rdfs:range :Place ; rdfs:subPropertyOf dul:hasLocation ; owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> . ``` 2. Run ``` python from langchain.graphs import RdfGraph graph = RdfGraph( source_file="dbpedia_sample.ttl", serialization="ttl", standard="owl" ) print(graph.get_schema) ``` 3. Output ``` In the following, each IRI is followed by the local name and optionally its description in parentheses. The OWL graph supports the following node types: <http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)) The OWL graph supports the following object properties, i.e., relationships between objects: <http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. 
This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born) The OWL graph supports the following data properties, i.e., relationships between objects and literals: ``` ### Expected behavior The issue is that, in the SPARQL queries that retrieve the properties, the `rdfs:comment` triple pattern always refers to the variable `?cls`, which obviously comes from copy/paste code. For example, getting the RDFS properties via ``` python rel_query_rdf = prefixes["rdfs"] + ( """SELECT DISTINCT ?rel ?com\n""" """WHERE { \n""" """ ?subj ?rel ?obj . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ``` you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`. The same holds for all other queries regarding properties. The current status leads to a cartesian product of properties and all `rdfs:comment` values in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part)
https://github.com/langchain-ai/langchain/issues/8907
https://github.com/langchain-ai/langchain/pull/9136
d9f1bcf366b5a66021d246d8e9c56e76fe60ead1
cce132d1460b4f52541cb4a6f13219fb8fe4f907
"2023-08-08T10:57:54Z"
python
"2023-10-25T20:36:57Z"
libs/langchain/langchain/graphs/rdf_graph.py
""" Returns the schema of the graph database. """ return self.schema def query( self, query: str, ) -> List[rdflib.query.ResultRow]: """ Query the graph. """ from rdflib.exceptions import ParserError from rdflib.query import ResultRow try: res = self.graph.query(query) except ParserError as e: raise ValueError("Generated SPARQL statement is invalid\n" f"{e}") return [r for r in res if isinstance(r, ResultRow)] def update(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,907
RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable
### System Info langchain = 0.0.251 Python = 3.10.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create an OWL ontology called `dbpedia_sample.ttl` with the following: ``` turtle @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix dcterms: <http://purl.org/dc/terms/> . @prefix wikidata: <http://www.wikidata.org/entity/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix prov: <http://www.w3.org/ns/prov#> . @prefix : <http://dbpedia.org/ontology/> . :Actor a owl:Class ; rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ; rdfs:label "actor"@en ; rdfs:subClassOf :Artist ; owl:equivalentClass wikidata:Q33999 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> . :AdministrativeRegion a owl:Class ; rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ; rdfs:label "administrative region"@en ; rdfs:subClassOf :Region ; owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> . 
:birthPlace a rdf:Property, owl:ObjectProperty ; rdfs:comment "where the person was born"@en ; rdfs:domain :Animal ; rdfs:label "birth place"@en ; rdfs:range :Place ; rdfs:subPropertyOf dul:hasLocation ; owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> . ``` 2. Run ``` python from langchain.graphs import RdfGraph graph = RdfGraph( source_file="dbpedia_sample.ttl", serialization="ttl", standard="owl" ) print(graph.get_schema) ``` 3. Output ``` In the following, each IRI is followed by the local name and optionally its description in parentheses. The OWL graph supports the following node types: <http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)) The OWL graph supports the following object properties, i.e., relationships between objects: <http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. 
This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born) The OWL graph supports the following data properties, i.e., relationships between objects and literals: ``` ### Expected behavior The issue is that, in the SPARQL queries that retrieve the properties, the `rdfs:comment` triple pattern always refers to the variable `?cls`, which obviously comes from copy/paste code. For example, getting the RDFS properties via ``` python rel_query_rdf = prefixes["rdfs"] + ( """SELECT DISTINCT ?rel ?com\n""" """WHERE { \n""" """ ?subj ?rel ?obj . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ``` you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`. The same holds for all other queries regarding properties. The current status leads to a cartesian product of properties and all `rdfs:comment` values in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part)
https://github.com/langchain-ai/langchain/issues/8907
https://github.com/langchain-ai/langchain/pull/9136
d9f1bcf366b5a66021d246d8e9c56e76fe60ead1
cce132d1460b4f52541cb4a6f13219fb8fe4f907
"2023-08-08T10:57:54Z"
python
"2023-10-25T20:36:57Z"
libs/langchain/langchain/graphs/rdf_graph.py
        self,
        query: str,
    ) -> None:
        """
        Update the graph.
        """
        from rdflib.exceptions import ParserError

        try:
            self.graph.update(query)
        except ParserError as e:
            raise ValueError("Generated SPARQL statement is invalid\n" f"{e}")
        if self.local_copy:
            self.graph.serialize(
                destination=self.local_copy, format=self.local_copy.split(".")[-1]
            )
        else:
            raise ValueError("No target file specified for saving the updated file.")

    @staticmethod
    def _get_local_name(iri: str) -> str:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,907
RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable
### System Info langchain = 0.0.251 Python = 3.10.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create an OWL ontology called `dbpedia_sample.ttl` with the following: ``` turtle @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix dcterms: <http://purl.org/dc/terms/> . @prefix wikidata: <http://www.wikidata.org/entity/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix prov: <http://www.w3.org/ns/prov#> . @prefix : <http://dbpedia.org/ontology/> . :Actor a owl:Class ; rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ; rdfs:label "actor"@en ; rdfs:subClassOf :Artist ; owl:equivalentClass wikidata:Q33999 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> . :AdministrativeRegion a owl:Class ; rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ; rdfs:label "administrative region"@en ; rdfs:subClassOf :Region ; owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> . 
:birthPlace a rdf:Property, owl:ObjectProperty ; rdfs:comment "where the person was born"@en ; rdfs:domain :Animal ; rdfs:label "birth place"@en ; rdfs:range :Place ; rdfs:subPropertyOf dul:hasLocation ; owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> . ``` 2. Run ``` python from langchain.graphs import RdfGraph graph = RdfGraph( source_file="dbpedia_sample.ttl", serialization="ttl", standard="owl" ) print(graph.get_schema) ``` 3. Output ``` In the following, each IRI is followed by the local name and optionally its description in parentheses. The OWL graph supports the following node types: <http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)) The OWL graph supports the following object properties, i.e., relationships between objects: <http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. 
This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born) The OWL graph supports the following data properties, i.e., relationships between objects and literals: ``` ### Expected behavior The issue is that, in the SPARQL queries that retrieve the properties, the `rdfs:comment` triple pattern always refers to the variable `?cls`, which obviously comes from copy/paste code. For example, getting the RDFS properties via ``` python rel_query_rdf = prefixes["rdfs"] + ( """SELECT DISTINCT ?rel ?com\n""" """WHERE { \n""" """ ?subj ?rel ?obj . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ``` you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`. The same holds for all other queries regarding properties. The current status leads to a cartesian product of properties and all `rdfs:comment` values in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part)
https://github.com/langchain-ai/langchain/issues/8907
https://github.com/langchain-ai/langchain/pull/9136
d9f1bcf366b5a66021d246d8e9c56e76fe60ead1
cce132d1460b4f52541cb4a6f13219fb8fe4f907
"2023-08-08T10:57:54Z"
python
"2023-10-25T20:36:57Z"
libs/langchain/langchain/graphs/rdf_graph.py
if "#" in iri: local_name = iri.split("#")[-1] elif "/" in iri: local_name = iri.split("/")[-1] else: raise ValueError(f"Unexpected IRI '{iri}', contains neither '#' nor '/'.") return local_name def _res_to_str(self, res: rdflib.query.ResultRow, var: str) -> str: return ( "<" + str(res[var]) + "> (" + self._get_local_name(res[var]) + ", " + str(res["com"]) + ")" ) def load_schema(self) -> None: """ Load the graph schema information. """ def _rdf_s_schema( classes: List[rdflib.query.ResultRow], relationships: List[rdflib.query.ResultRow], ) -> str: return ( f"In the following, each IRI is followed by the local name and " f"optionally its description in parentheses. \n" f"The RDF graph supports the following node types:\n" f'{", ".join([self._res_to_str(r, "cls") for r in classes])}\n'
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
8,907
RdfGraph schema retrieval queries for the relation types are not linked by the correct comment variable
### System Info langchain = 0.0.251 Python = 3.10.11 ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction 1. Create an OWL ontology called `dbpedia_sample.ttl` with the following: ``` turtle @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> . @prefix dcterms: <http://purl.org/dc/terms/> . @prefix wikidata: <http://www.wikidata.org/entity/> . @prefix owl: <http://www.w3.org/2002/07/owl#> . @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> . @prefix prov: <http://www.w3.org/ns/prov#> . @prefix : <http://dbpedia.org/ontology/> . :Actor a owl:Class ; rdfs:comment "An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity."@en ; rdfs:label "actor"@en ; rdfs:subClassOf :Artist ; owl:equivalentClass wikidata:Q33999 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:Actor> . :AdministrativeRegion a owl:Class ; rdfs:comment "A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)"@en ; rdfs:label "administrative region"@en ; rdfs:subClassOf :Region ; owl:equivalentClass <http://schema.org/AdministrativeArea>, wikidata:Q3455524 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyClass:AdministrativeRegion> . 
:birthPlace a rdf:Property, owl:ObjectProperty ; rdfs:comment "where the person was born"@en ; rdfs:domain :Animal ; rdfs:label "birth place"@en ; rdfs:range :Place ; rdfs:subPropertyOf dul:hasLocation ; owl:equivalentProperty <http://schema.org/birthPlace>, wikidata:P19 ; prov:wasDerivedFrom <http://mappings.dbpedia.org/index.php/OntologyProperty:birthPlace> . ``` 2. Run ``` python from langchain.graphs import RdfGraph graph = RdfGraph( source_file="dbpedia_sample.ttl", serialization="ttl", standard="owl" ) print(graph.get_schema) ``` 3. Output ``` In the following, each IRI is followed by the local name and optionally its description in parentheses. The OWL graph supports the following node types: <http://dbpedia.org/ontology/Actor> (Actor, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/AdministrativeRegion> (AdministrativeRegion, A PopulatedPlace under the jurisdiction of an administrative body. This body may administer either a whole region or one or more adjacent Settlements (town administration)) The OWL graph supports the following object properties, i.e., relationships between objects: <http://dbpedia.org/ontology/birthPlace> (birthPlace, An actor or actress is a person who acts in a dramatic production and who works in film, television, theatre, or radio in that capacity.), <http://dbpedia.org/ontology/birthPlace> (birthPlace, A PopulatedPlace under the jurisdiction of an administrative body. 
This body may administer either a whole region or one or more adjacent Settlements (town administration)), <http://dbpedia.org/ontology/birthPlace> (birthPlace, where the person was born) The OWL graph supports the following data properties, i.e., relationships between objects and literals: ``` ### Expected behavior The issue is that, in the SPARQL queries that retrieve the properties, the `rdfs:comment` triple pattern always refers to the variable `?cls`, which obviously comes from copy/paste code. For example, getting the RDFS properties via ``` python rel_query_rdf = prefixes["rdfs"] + ( """SELECT DISTINCT ?rel ?com\n""" """WHERE { \n""" """ ?subj ?rel ?obj . \n""" """ OPTIONAL { ?cls rdfs:comment ?com } \n""" """}""" ) ``` you can see that the `OPTIONAL` clause refers to `?cls`, but it should be `?rel`. The same holds for all other queries regarding properties. The current status leads to a cartesian product of properties and all `rdfs:comment` values in the dataset, which can be horribly large and of course leads to misleading and huge prompts (see the output of my sample in the "reproduction" part)
https://github.com/langchain-ai/langchain/issues/8907
https://github.com/langchain-ai/langchain/pull/9136
d9f1bcf366b5a66021d246d8e9c56e76fe60ead1
cce132d1460b4f52541cb4a6f13219fb8fe4f907
"2023-08-08T10:57:54Z"
python
"2023-10-25T20:36:57Z"
libs/langchain/langchain/graphs/rdf_graph.py
f"The RDF graph supports the following relationships:\n" f'{", ".join([self._res_to_str(r, "rel") for r in relationships])}\n' ) if self.standard == "rdf": clss = self.query(cls_query_rdf) rels = self.query(rel_query_rdf) self.schema = _rdf_s_schema(clss, rels) elif self.standard == "rdfs": clss = self.query(cls_query_rdfs) rels = self.query(rel_query_rdfs) self.schema = _rdf_s_schema(clss, rels) elif self.standard == "owl": clss = self.query(cls_query_owl) ops = self.query(op_query_owl) dps = self.query(dp_query_owl) self.schema = ( f"In the following, each IRI is followed by the local name and " f"optionally its description in parentheses. \n" f"The OWL graph supports the following node types:\n" f'{", ".join([self._res_to_str(r, "cls") for r in clss])}\n' f"The OWL graph supports the following object properties, " f"i.e., relationships between objects:\n" f'{", ".join([self._res_to_str(r, "op") for r in ops])}\n' f"The OWL graph supports the following data properties, " f"i.e., relationships between objects and literals:\n" f'{", ".join([self._res_to_str(r, "dp") for r in dps])}\n' ) else: raise ValueError(f"Mode '{self.standard}' is currently not supported.")
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,068
feat: Add Linearized output to Textract PDFLoader
### Feature request Textract released the [LAYOUT](https://docs.aws.amazon.com/textract/latest/dg/layoutresponse.html) feature, which identifies different layout elements like tables, lists, figures, text-paragraphs and titles. This should be used by the AmazonTextractPDFParser to generate a linearized output to improve downstream LLMs accuracy with those hints. Text output should render tables and key/value pairs and text in reading order for multi-column text and prefix lists with a *, when features like LAYOUT, TABLES, FORMS are passed to the textract call ### Motivation Improve downstream LLM accuracy ### Your contribution I'll submit a PR for this feature.
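As a rough illustration of the linearization idea described above (this is not Textract's actual algorithm — the block types, geometry fields, and function name here are made up for the sketch), multi-column reading order can be approximated by sorting layout blocks column-first, and list items can be prefixed with `*`:

```python
# Hypothetical sketch of layout linearization: emit blocks in reading
# order (left column before right column, top to bottom within each
# column) and prefix list blocks with "* ". Illustration only.
def linearize(blocks):
    # Each block: (column_index, top_offset, block_type, text)
    ordered = sorted(blocks, key=lambda b: (b[0], b[1]))
    lines = []
    for _, _, block_type, text in ordered:
        prefix = "* " if block_type == "LAYOUT_LIST" else ""
        lines.append(prefix + text)
    return "\n".join(lines)

blocks = [
    (1, 0.1, "LAYOUT_TEXT", "right column paragraph"),
    (0, 0.1, "LAYOUT_TITLE", "Document title"),
    (0, 0.5, "LAYOUT_LIST", "first bullet"),
]
print(linearize(blocks))
```

The point of such a linearized rendering is that the downstream LLM sees titles, lists, and multi-column text in natural reading order rather than raw detection order.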
https://github.com/langchain-ai/langchain/issues/12068
https://github.com/langchain-ai/langchain/pull/12446
a7d5e0ce8a30bd81b8f7b544a4859c31d5f25445
0c7f1d8b219e87e3ffd14a15a452622c532c7e95
"2023-10-20T08:28:07Z"
python
"2023-10-31T01:02:10Z"
libs/langchain/langchain/document_loaders/parsers/pdf.py
"""Module contains common parsers for PDFs.""" from __future__ import annotations import warnings from typing import ( TYPE_CHECKING, Any, Iterable, Iterator, Mapping, Optional, Sequence,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,068
feat: Add Linearized output to Textract PDFLoader
### Feature request Textract released the [LAYOUT](https://docs.aws.amazon.com/textract/latest/dg/layoutresponse.html) feature, which identifies different layout elements like tables, lists, figures, text-paragraphs and titles. This should be used by the AmazonTextractPDFParser to generate a linearized output to improve downstream LLMs accuracy with those hints. Text output should render tables and key/value pairs and text in reading order for multi-column text and prefix lists with a *, when features like LAYOUT, TABLES, FORMS are passed to the textract call ### Motivation Improve downstream LLM accuracy ### Your contribution I'll submit a PR for this feature.
https://github.com/langchain-ai/langchain/issues/12068
https://github.com/langchain-ai/langchain/pull/12446
a7d5e0ce8a30bd81b8f7b544a4859c31d5f25445
0c7f1d8b219e87e3ffd14a15a452622c532c7e95
"2023-10-20T08:28:07Z"
python
"2023-10-31T01:02:10Z"
libs/langchain/langchain/document_loaders/parsers/pdf.py
    Union,
)
from urllib.parse import urlparse

import numpy as np

from langchain.document_loaders.base import BaseBlobParser
from langchain.document_loaders.blob_loaders import Blob
from langchain.schema import Document

if TYPE_CHECKING:
    import fitz.fitz
    import pdfminer.layout
    import pdfplumber.page
    import pypdf._page
    import pypdfium2._helpers.page

_PDF_FILTER_WITH_LOSS = ["DCTDecode", "DCT", "JPXDecode"]
_PDF_FILTER_WITHOUT_LOSS = [
    "LZWDecode",
    "LZW",
    "FlateDecode",
    "Fl",
    "ASCII85Decode",
    "A85",
    "ASCIIHexDecode",
    "AHx",
    "RunLengthDecode",
    "RL",
    "CCITTFaxDecode",
    "CCF",
    "JBIG2Decode",
]


def extract_from_images_with_rapidocr(
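The two filter lists in the chunk above drive how embedded image streams are decoded: the parser strips the leading `/` from a PDF `/Filter` name before the membership test. A small standalone sketch of that classification (the helper name is made up):

```python
# Illustration of how the filter lists are consulted: a PDF image
# XObject's /Filter value such as "/FlateDecode" loses its leading slash
# before being checked against the lossless / lossy lists.
_PDF_FILTER_WITH_LOSS = ["DCTDecode", "DCT", "JPXDecode"]
_PDF_FILTER_WITHOUT_LOSS = [
    "LZWDecode", "LZW", "FlateDecode", "Fl", "ASCII85Decode", "A85",
    "ASCIIHexDecode", "AHx", "RunLengthDecode", "RL",
    "CCITTFaxDecode", "CCF", "JBIG2Decode",
]

def classify_filter(pdf_filter_name: str) -> str:
    name = pdf_filter_name[1:]  # drop the leading "/"
    if name in _PDF_FILTER_WITHOUT_LOSS:
        return "lossless"  # raw pixels: reshaped via np.frombuffer
    if name in _PDF_FILTER_WITH_LOSS:
        return "lossy"     # already-encoded bytes (e.g. JPEG): passed through
    return "unknown"

print(classify_filter("/FlateDecode"))  # -> lossless
print(classify_filter("/DCTDecode"))    # -> lossy
```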
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,068
feat: Add Linearized output to Textract PDFLoader
### Feature request Textract released the [LAYOUT](https://docs.aws.amazon.com/textract/latest/dg/layoutresponse.html) feature, which identifies different layout elements like tables, lists, figures, text-paragraphs and titles. This should be used by the AmazonTextractPDFParser to generate a linearized output to improve downstream LLMs accuracy with those hints. Text output should render tables and key/value pairs and text in reading order for multi-column text and prefix lists with a *, when features like LAYOUT, TABLES, FORMS are passed to the textract call ### Motivation Improve downstream LLM accuracy ### Your contribution I'll submit a PR for this feature.
https://github.com/langchain-ai/langchain/issues/12068
https://github.com/langchain-ai/langchain/pull/12446
a7d5e0ce8a30bd81b8f7b544a4859c31d5f25445
0c7f1d8b219e87e3ffd14a15a452622c532c7e95
"2023-10-20T08:28:07Z"
python
"2023-10-31T01:02:10Z"
libs/langchain/langchain/document_loaders/parsers/pdf.py
    images: Sequence[Union[Iterable[np.ndarray], bytes]]
) -> str:
    """Extract text from images with RapidOCR.

    Args:
        images: Images to extract text from.

    Returns:
        Text extracted from images.

    Raises:
        ImportError: If `rapidocr-onnxruntime` package is not installed.
    """
    try:
        from rapidocr_onnxruntime import RapidOCR
    except ImportError:
        raise ImportError(
            "`rapidocr-onnxruntime` package not found, please install it with "
            "`pip install rapidocr-onnxruntime`"
        )
    ocr = RapidOCR()
    text = ""
    for img in images:
        result, _ = ocr(img)
        if result:
            result = [text[1] for text in result]
            text += "\n".join(result)
    return text


class PyPDFParser(BaseBlobParser):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,068
feat: Add Linearized output to Textract PDFLoader
### Feature request Textract released the [LAYOUT](https://docs.aws.amazon.com/textract/latest/dg/layoutresponse.html) feature, which identifies different layout elements like tables, lists, figures, text-paragraphs and titles. This should be used by the AmazonTextractPDFParser to generate a linearized output to improve downstream LLMs accuracy with those hints. Text output should render tables and key/value pairs and text in reading order for multi-column text and prefix lists with a *, when features like LAYOUT, TABLES, FORMS are passed to the textract call ### Motivation Improve downstream LLM accuracy ### Your contribution I'll submit a PR for this feature.
https://github.com/langchain-ai/langchain/issues/12068
https://github.com/langchain-ai/langchain/pull/12446
a7d5e0ce8a30bd81b8f7b544a4859c31d5f25445
0c7f1d8b219e87e3ffd14a15a452622c532c7e95
"2023-10-20T08:28:07Z"
python
"2023-10-31T01:02:10Z"
libs/langchain/langchain/document_loaders/parsers/pdf.py
"""Load `PDF` using `pypdf`""" def __init__( self, password: Optional[Union[str, bytes]] = None, extract_images: bool = False ): self.password = password self.extract_images = extract_images def lazy_parse(self, blob: Blob) -> Iterator[Document]: """Lazily parse the blob.""" import pypdf with blob.as_bytes_io() as pdf_file_obj: pdf_reader = pypdf.PdfReader(pdf_file_obj, password=self.password) yield from [ Document( page_content=page.extract_text() + self._extract_images_from_page(page), metadata={"source": blob.source, "page": page_number}, ) for page_number, page in enumerate(pdf_reader.pages) ] def _extract_images_from_page(self, page: pypdf._page.PageObject) -> str:
"""Extract images from page and get the text with RapidOCR.""" if not self.extract_images or "/XObject" not in page["/Resources"].keys(): return "" xObject = page["/Resources"]["/XObject"].get_object() images = [] for obj in xObject: if xObject[obj]["/Subtype"] == "/Image": if xObject[obj]["/Filter"][1:] in _PDF_FILTER_WITHOUT_LOSS: height, width = xObject[obj]["/Height"], xObject[obj]["/Width"] images.append( np.frombuffer(xObject[obj].get_data(), dtype=np.uint8).reshape( height, width, -1 ) ) elif xObject[obj]["/Filter"][1:] in _PDF_FILTER_WITH_LOSS: images.append(xObject[obj].get_data()) else: warnings.warn("Unknown PDF Filter!") return extract_from_images_with_rapidocr(images) class PDFMinerParser(BaseBlobParser):
"""Parse `PDF` using `PDFMiner`.""" def __init__(self, extract_images: bool = False): self.extract_images = extract_images def lazy_parse(self, blob: Blob) -> Iterator[Document]: """Lazily parse the blob.""" if not self.extract_images: from pdfminer.high_level import extract_text with blob.as_bytes_io() as pdf_file_obj: text = extract_text(pdf_file_obj) metadata = {"source": blob.source} yield Document(page_content=text, metadata=metadata) else: import io from pdfminer.converter import PDFPageAggregator, TextConverter from pdfminer.layout import LAParams from pdfminer.pdfinterp import PDFPageInterpreter, PDFResourceManager from pdfminer.pdfpage import PDFPage text_io = io.StringIO() with blob.as_bytes_io() as pdf_file_obj: pages = PDFPage.get_pages(pdf_file_obj) rsrcmgr = PDFResourceManager() device_for_text = TextConverter(rsrcmgr, text_io, laparams=LAParams()) device_for_image = PDFPageAggregator(rsrcmgr, laparams=LAParams()) interpreter_for_text = PDFPageInterpreter(rsrcmgr, device_for_text) interpreter_for_image = PDFPageInterpreter(rsrcmgr, device_for_image) for i, page in enumerate(pages): interpreter_for_text.process_page(page) interpreter_for_image.process_page(page) content = text_io.getvalue() + self._extract_images_from_page( device_for_image.get_result()
                    )
                    text_io.truncate(0)
                    text_io.seek(0)
                    metadata = {"source": blob.source, "page": str(i)}
                    yield Document(page_content=content, metadata=metadata)

    def _extract_images_from_page(self, page: pdfminer.layout.LTPage) -> str:
        """Extract images from page and get the text with RapidOCR."""
        import pdfminer

        def get_image(layout_object: Any) -> Any:
            if isinstance(layout_object, pdfminer.layout.LTImage):
                return layout_object
            if isinstance(layout_object, pdfminer.layout.LTContainer):
                for child in layout_object:
                    return get_image(child)
            else:
                return None

        images = []
        for img in list(filter(bool, map(get_image, page))):
            if img.stream["Filter"].name in _PDF_FILTER_WITHOUT_LOSS:
                images.append(
                    np.frombuffer(img.stream.get_data(), dtype=np.uint8).reshape(
                        img.stream["Height"], img.stream["Width"], -1
                    )
                )
            elif img.stream["Filter"].name in _PDF_FILTER_WITH_LOSS:
                images.append(img.stream.get_data())
            else:
                warnings.warn("Unknown PDF Filter!")
        return extract_from_images_with_rapidocr(images)


class PyMuPDFParser(BaseBlobParser):
"""Parse `PDF` using `PyMuPDF`.""" def __init__( self, text_kwargs: Optional[Mapping[str, Any]] = None, extract_images: bool = False, ) -> None: """Initialize the parser. Args: text_kwargs: Keyword arguments to pass to ``fitz.Page.get_text()``. """ self.text_kwargs = text_kwargs or {} self.extract_images = extract_images def lazy_parse(self, blob: Blob) -> Iterator[Document]:
"""Lazily parse the blob.""" import fitz with blob.as_bytes_io() as file_path: doc = fitz.open(file_path) yield from [ Document( page_content=page.get_text(**self.text_kwargs) + self._extract_images_from_page(doc, page), metadata=dict( { "source": blob.source, "file_path": blob.source, "page": page.number, "total_pages": len(doc), }, **{ k: doc.metadata[k] for k in doc.metadata if type(doc.metadata[k]) in [str, int] }, ), ) for page in doc ] def _extract_images_from_page(
        self, doc: fitz.fitz.Document, page: fitz.fitz.Page
    ) -> str:
        """Extract images from page and get the text with RapidOCR."""
        if not self.extract_images:
            return ""
        import fitz

        img_list = page.get_images()
        imgs = []
        for img in img_list:
            xref = img[0]
            pix = fitz.Pixmap(doc, xref)
            imgs.append(
                np.frombuffer(pix.samples, dtype=np.uint8).reshape(
                    pix.height, pix.width, -1
                )
            )
        return extract_from_images_with_rapidocr(imgs)


class PyPDFium2Parser(BaseBlobParser):
    """Parse `PDF` with `PyPDFium2`."""

    def __init__(self, extract_images: bool = False) -> None:
        """Initialize the parser."""
        try:
            import pypdfium2
        except ImportError:
            raise ImportError(
                "pypdfium2 package not found, please install it with"
                " `pip install pypdfium2`"
            )
        self.extract_images = extract_images

    def lazy_parse(self, blob: Blob) -> Iterator[Document]:
"""Lazily parse the blob.""" import pypdfium2 with blob.as_bytes_io() as file_path: pdf_reader = pypdfium2.PdfDocument(file_path, autoclose=True) try: for page_number, page in enumerate(pdf_reader): text_page = page.get_textpage() content = text_page.get_text_range() text_page.close() content += "\n" + self._extract_images_from_page(page) page.close() metadata = {"source": blob.source, "page": page_number} yield Document(page_content=content, metadata=metadata) finally: pdf_reader.close() def _extract_images_from_page(self, page: pypdfium2._helpers.page.PdfPage) -> str: """Extract images from page and get the text with RapidOCR.""" if not self.extract_images: return "" import pypdfium2.raw as pdfium_c images = list(page.get_objects(filter=(pdfium_c.FPDF_PAGEOBJ_IMAGE,))) images = list(map(lambda x: x.get_bitmap().to_numpy(), images)) return extract_from_images_with_rapidocr(images) class PDFPlumberParser(BaseBlobParser):
"""Parse `PDF` with `PDFPlumber`.""" def __init__( self, text_kwargs: Optional[Mapping[str, Any]] = None, dedupe: bool = False, extract_images: bool = False, ) -> None: """Initialize the parser. Args: text_kwargs: Keyword arguments to pass to ``pdfplumber.Page.extract_text()`` dedupe: Avoiding the error of duplicate characters if `dedupe=True`. """ self.text_kwargs = text_kwargs or {} self.dedupe = dedupe self.extract_images = extract_images def lazy_parse(self, blob: Blob) -> Iterator[Document]:
"""Lazily parse the blob.""" import pdfplumber with blob.as_bytes_io() as file_path: doc = pdfplumber.open(file_path) yield from [ Document( page_content=self._process_page_content(page) + "\n" + self._extract_images_from_page(page), metadata=dict( { "source": blob.source, "file_path": blob.source, "page": page.page_number - 1, "total_pages": len(doc.pages), }, **{ k: doc.metadata[k] for k in doc.metadata if type(doc.metadata[k]) in [str, int] }, ), ) for page in doc.pages ] def _process_page_content(self, page: pdfplumber.page.Page) -> str:
"""Process the page content based on dedupe.""" if self.dedupe: return page.dedupe_chars().extract_text(**self.text_kwargs) return page.extract_text(**self.text_kwargs) def _extract_images_from_page(self, page: pdfplumber.page.Page) -> str: """Extract images from page and get the text with RapidOCR.""" if not self.extract_images: return "" images = [] for img in page.images: if img["stream"]["Filter"].name in _PDF_FILTER_WITHOUT_LOSS: images.append( np.frombuffer(img["stream"].get_data(), dtype=np.uint8).reshape( img["stream"]["Height"], img["stream"]["Width"], -1 ) ) elif img["stream"]["Filter"].name in _PDF_FILTER_WITH_LOSS: images.append(img["stream"].get_data()) else: warnings.warn("Unknown PDF Filter!") return extract_from_images_with_rapidocr(images) class AmazonTextractPDFParser(BaseBlobParser):
"""Send `PDF` files to `Amazon Textract` and parse them. For parsing multi-page PDFs, they have to reside on S3. """ def __init__( self, textract_features: Optional[Sequence[int]] = None, client: Optional[Any] = None, ) -> None: """Initializes the parser. Args: textract_features: Features to be used for extraction, each feature should be passed as an int that conforms to the enum `Textract_Features`, see `amazon-textract-caller` pkg client: boto3 textract client """ try: import textractcaller as tc self.tc = tc if textract_features is not None: self.textract_features = [ tc.Textract_Features(f) for f in textract_features
                ]
            else:
                self.textract_features = []
        except ImportError:
            raise ImportError(
                "Could not import amazon-textract-caller python package. "
                "Please install it with `pip install amazon-textract-caller`."
            )

        if not client:
            try:
                import boto3

                self.boto3_textract_client = boto3.client("textract")
            except ImportError:
                raise ImportError(
                    "Could not import boto3 python package. "
                    "Please install it with `pip install boto3`."
                )
        else:
            self.boto3_textract_client = client

    def lazy_parse(self, blob: Blob) -> Iterator[Document]:
        """Iterates over the Blob pages and returns an Iterator with a Document
        for each page, like the other parsers. If multi-page document, blob.path
        has to be set to the S3 URI and for single-page docs the blob.data is taken.
        """
        url_parse_result = urlparse(str(blob.path)) if blob.path else None
        if (
            url_parse_result
            and url_parse_result.scheme == "s3"
            and url_parse_result.netloc
        ):
            textract_response_json = self.tc.call_textract(
                input_document=str(blob.path),
                features=self.textract_features,
                boto3_textract_client=self.boto3_textract_client,
            )
        else:
            textract_response_json = self.tc.call_textract(
                input_document=blob.as_bytes(),
                features=self.textract_features,
                call_mode=self.tc.Textract_Call_Mode.FORCE_SYNC,
                boto3_textract_client=self.boto3_textract_client,
            )

        current_text = ""
        current_page = 1
        for block in textract_response_json["Blocks"]:
            if "Page" in block and not (int(block["Page"]) == current_page):
                yield Document(
                    page_content=current_text,
                    metadata={"source": blob.source, "page": current_page},
                )
                current_text = ""
                current_page = int(block["Page"])
            if "Text" in block:
                current_text += block["Text"] + " "

        yield Document(
            page_content=current_text,
            metadata={"source": blob.source, "page": current_page},
        )


class DocumentIntelligenceParser(BaseBlobParser):
"""Loads a PDF with Azure Document Intelligence (formerly Forms Recognizer) and chunks at character level.""" def __init__(self, client: Any, model: str): self.client = client self.model = model def _generate_docs(self, blob: Blob, result: Any) -> Iterator[Document]: for p in result.pages: content = " ".join([line.content for line in p.lines]) d = Document( page_content=content, metadata={ "source": blob.source, "page": p.page_number, }, ) yield d def lazy_parse(self, blob: Blob) -> Iterator[Document]: """Lazily parse the blob.""" with blob.as_bytes_io() as file_obj: poller = self.client.begin_analyze_document(self.model, file_obj) result = poller.result() docs = self._generate_docs(blob, result) yield from docs
libs/langchain/tests/integration_tests/document_loaders/test_pdf.py
from pathlib import Path
from typing import Sequence, Union

import pytest

from langchain.document_loaders import (
    AmazonTextractPDFLoader,
    MathpixPDFLoader,
    PDFMinerLoader,
    PDFMinerPDFasHTMLLoader,
    PyMuPDFLoader,
    PyPDFium2Loader,
    PyPDFLoader,
    UnstructuredPDFLoader,
)


def test_unstructured_pdf_loader_elements_mode() -> None:
"""Test unstructured loader with various modes.""" file_path = Path(__file__).parent.parent / "examples/hello.pdf" loader = UnstructuredPDFLoader(str(file_path), mode="elements") docs = loader.load() assert len(docs) == 2 def test_unstructured_pdf_loader_paged_mode() -> None: """Test unstructured loader with various modes.""" file_path = Path(__file__).parent.parent / "examples/layout-parser-paper.pdf" loader = UnstructuredPDFLoader(str(file_path), mode="paged") docs = loader.load() assert len(docs) == 16 def test_unstructured_pdf_loader_default_mode() -> None: """Test unstructured loader.""" file_path = Path(__file__).parent.parent / "examples/hello.pdf" loader = UnstructuredPDFLoader(str(file_path)) docs = loader.load() assert len(docs) == 1 def test_pdfminer_loader() -> None:
"""Test PDFMiner loader.""" file_path = Path(__file__).parent.parent / "examples/hello.pdf" loader = PDFMinerLoader(str(file_path)) docs = loader.load() assert len(docs) == 1 file_path = Path(__file__).parent.parent / "examples/layout-parser-paper.pdf" loader = PDFMinerLoader(str(file_path)) docs = loader.load() assert len(docs) == 1 def test_pdfminer_pdf_as_html_loader() -> None: """Test PDFMinerPDFasHTMLLoader.""" file_path = Path(__file__).parent.parent / "examples/hello.pdf" loader = PDFMinerPDFasHTMLLoader(str(file_path)) docs = loader.load() assert len(docs) == 1 file_path = Path(__file__).parent.parent / "examples/layout-parser-paper.pdf" loader = PDFMinerPDFasHTMLLoader(str(file_path)) docs = loader.load() assert len(docs) == 1 def test_pypdf_loader() -> None: """Test PyPDFLoader.""" file_path = Path(__file__).parent.parent / "examples/hello.pdf" loader = PyPDFLoader(str(file_path)) docs = loader.load() assert len(docs) == 1 file_path = Path(__file__).parent.parent / "examples/layout-parser-paper.pdf" loader = PyPDFLoader(str(file_path)) docs = loader.load() assert len(docs) == 16 def test_pypdfium2_loader() -> None:
"""Test PyPDFium2Loader.""" file_path = Path(__file__).parent.parent / "examples/hello.pdf" loader = PyPDFium2Loader(str(file_path)) docs = loader.load() assert len(docs) == 1 file_path = Path(__file__).parent.parent / "examples/layout-parser-paper.pdf" loader = PyPDFium2Loader(str(file_path)) docs = loader.load() assert len(docs) == 16 def test_pymupdf_loader() -> None: """Test PyMuPDF loader.""" file_path = Path(__file__).parent.parent / "examples/hello.pdf" loader = PyMuPDFLoader(str(file_path)) docs = loader.load() assert len(docs) == 1 file_path = Path(__file__).parent.parent / "examples/layout-parser-paper.pdf" loader = PyMuPDFLoader(str(file_path)) docs = loader.load() assert len(docs) == 16 assert loader.web_path is None web_path = "https://people.sc.fsu.edu/~jpeterson/hello_world.pdf" loader = PyMuPDFLoader(web_path) docs = loader.load() assert loader.web_path == web_path assert loader.file_path != web_path assert len(docs) == 1 def test_mathpix_loader() -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,068
feat: Add Linearized output to Textract PDFLoader
### Feature request Textract released the [LAYOUT](https://docs.aws.amazon.com/textract/latest/dg/layoutresponse.html) feature, which identifies different layout elements like tables, lists, figures, text-paragraphs and titles. This should be used by the AmazonTextractPDFParser to generate a linearized output to improve downstream LLMs accuracy with those hints. Text output should render tables and key/value pairs and text in reading order for multi-column text and prefix lists with a *, when features like LAYOUT, TABLES, FORMS are passed to the textract call ### Motivation Improve downstream LLM accuracy ### Your contribution I'll submit a PR for this feature.
https://github.com/langchain-ai/langchain/issues/12068
https://github.com/langchain-ai/langchain/pull/12446
a7d5e0ce8a30bd81b8f7b544a4859c31d5f25445
0c7f1d8b219e87e3ffd14a15a452622c532c7e95
"2023-10-20T08:28:07Z"
python
"2023-10-31T01:02:10Z"
libs/langchain/tests/integration_tests/document_loaders/test_pdf.py
    file_path = Path(__file__).parent.parent / "examples/hello.pdf"
    loader = MathpixPDFLoader(str(file_path))
    docs = loader.load()
    assert len(docs) == 1
    print(docs[0].page_content)

    file_path = Path(__file__).parent.parent / "examples/layout-parser-paper.pdf"
    loader = MathpixPDFLoader(str(file_path))
    docs = loader.load()
    assert len(docs) == 1
    print(docs[0].page_content)


@pytest.mark.parametrize(
    "file_path, features, docs_length, create_client",
    [
        (
            (
                "https://amazon-textract-public-content.s3.us-east-2.amazonaws.com"
                "/langchain/alejandro_rosalez_sample_1.jpg"
            ),
            ["FORMS", "TABLES"],
            1,
            False,
        ),
        (str(Path(__file__).parent.parent / "examples/hello.pdf"), ["FORMS"], 1, False),
        (
            "s3://amazon-textract-public-content/langchain/layout-parser-paper.pdf",
            None,
            16,
            True,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,068
feat: Add Linearized output to Textract PDFLoader
### Feature request Textract released the [LAYOUT](https://docs.aws.amazon.com/textract/latest/dg/layoutresponse.html) feature, which identifies different layout elements like tables, lists, figures, text-paragraphs and titles. This should be used by the AmazonTextractPDFParser to generate a linearized output to improve downstream LLMs accuracy with those hints. Text output should render tables and key/value pairs and text in reading order for multi-column text and prefix lists with a *, when features like LAYOUT, TABLES, FORMS are passed to the textract call ### Motivation Improve downstream LLM accuracy ### Your contribution I'll submit a PR for this feature.
https://github.com/langchain-ai/langchain/issues/12068
https://github.com/langchain-ai/langchain/pull/12446
a7d5e0ce8a30bd81b8f7b544a4859c31d5f25445
0c7f1d8b219e87e3ffd14a15a452622c532c7e95
"2023-10-20T08:28:07Z"
python
"2023-10-31T01:02:10Z"
libs/langchain/tests/integration_tests/document_loaders/test_pdf.py
        ),
    ],
)
@pytest.mark.skip(reason="Requires AWS credentials to run")
def test_amazontextract_loader(
    file_path: str,
    features: Union[Sequence[str], None],
    docs_length: int,
    create_client: bool,
) -> None:
    if create_client:
        import boto3

        textract_client = boto3.client("textract", region_name="us-east-2")
        loader = AmazonTextractPDFLoader(
            file_path, textract_features=features, client=textract_client
        )
    else:
        loader = AmazonTextractPDFLoader(file_path, textract_features=features)
    docs = loader.load()

    assert len(docs) == docs_length


@pytest.mark.skip(reason="Requires AWS credentials to run")
def test_amazontextract_loader_failures() -> None:
    two_page_pdf = str(
        Path(__file__).parent.parent / "examples/multi-page-forms-sample-2-page.pdf"
    )
    loader = AmazonTextractPDFLoader(two_page_pdf)
    with pytest.raises(ValueError):
        loader.load()
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
from __future__ import annotations

import logging
import warnings
from typing import (
    Any,
    Callable,
    Dict,
    List,
    Literal,
    Optional,
    Sequence,
    Set,
    Tuple,
    Union,
)

import numpy as np
from tenacity import (
    AsyncRetrying,
    before_sleep_log,
    retry,
    retry_if_exception_type,
    stop_after_attempt,
    wait_exponential,
)

from langchain.pydantic_v1 import BaseModel, Extra, Field, root_validator
from langchain.schema.embeddings import Embeddings
from langchain.utils import get_from_dict_or_env, get_pydantic_field_names

logger = logging.getLogger(__name__)


def _create_retry_decorator(embeddings: OpenAIEmbeddings) -> Callable[[Any], Any]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
    import openai

    min_seconds = 4
    max_seconds = 10
    return retry(
        reraise=True,
        stop=stop_after_attempt(embeddings.max_retries),
        wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
        retry=(
            retry_if_exception_type(openai.error.Timeout)
            | retry_if_exception_type(openai.error.APIError)
            | retry_if_exception_type(openai.error.APIConnectionError)
            | retry_if_exception_type(openai.error.RateLimitError)
            | retry_if_exception_type(openai.error.ServiceUnavailableError)
        ),
        before_sleep=before_sleep_log(logger, logging.WARNING),
    )


def _async_retry_decorator(embeddings: OpenAIEmbeddings) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
    import openai

    min_seconds = 4
    max_seconds = 10
    async_retrying = AsyncRetrying(
        reraise=True,
        stop=stop_after_attempt(embeddings.max_retries),
        wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
        retry=(
            retry_if_exception_type(openai.error.Timeout)
            | retry_if_exception_type(openai.error.APIError)
            | retry_if_exception_type(openai.error.APIConnectionError)
            | retry_if_exception_type(openai.error.RateLimitError)
            | retry_if_exception_type(openai.error.ServiceUnavailableError)
        ),
        before_sleep=before_sleep_log(logger, logging.WARNING),
    )

    def wrap(func: Callable) -> Callable:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
        async def wrapped_f(*args: Any, **kwargs: Any) -> Callable:
            async for _ in async_retrying:
                return await func(*args, **kwargs)
            raise AssertionError("this is unreachable")

        return wrapped_f

    return wrap


def _check_response(response: dict, skip_empty: bool = False) -> dict:
    if any(len(d["embedding"]) == 1 for d in response["data"]) and not skip_empty:
        import openai

        raise openai.error.APIError("OpenAI API returned an empty embedding")
    return response


def embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:
    """Use tenacity to retry the embedding call."""
    retry_decorator = _create_retry_decorator(embeddings)

    @retry_decorator
    def _embed_with_retry(**kwargs: Any) -> Any:
        response = embeddings.client.create(**kwargs)
        return _check_response(response, skip_empty=embeddings.skip_empty)

    return _embed_with_retry(**kwargs)


async def async_embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:
    """Use tenacity to retry the embedding call."""

    @_async_retry_decorator(embeddings)
    async def _async_embed_with_retry(**kwargs: Any) -> Any:
        response = await embeddings.client.acreate(**kwargs)
        return _check_response(response, skip_empty=embeddings.skip_empty)

    return await _async_embed_with_retry(**kwargs)


class OpenAIEmbeddings(BaseModel, Embeddings):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
"""OpenAI embedding models. To use, you should have the ``openai`` python package installed, and the environment variable ``OPENAI_API_KEY`` set with your API key or pass it as a named parameter to the constructor. Example: .. code-block:: python from langchain.embeddings import OpenAIEmbeddings openai = OpenAIEmbeddings(openai_api_key="my-api-key") In order to use the library with Microsoft Azure endpoints, you need to set the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION. The OPENAI_API_TYPE must be set to 'azure' and the others correspond to the properties of your endpoint.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
    In addition, the deployment name must be passed as the model parameter.

    Example:
        .. code-block:: python

            import os

            os.environ["OPENAI_API_TYPE"] = "azure"
            os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
            os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
            os.environ["OPENAI_API_VERSION"] = "2023-05-15"
            os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"

            from langchain.embeddings.openai import OpenAIEmbeddings

            embeddings = OpenAIEmbeddings(
                deployment="your-embeddings-deployment-name",
                model="your-embeddings-model-name",
                openai_api_base="https://your-endpoint.openai.azure.com/",
                openai_api_type="azure",
            )
            text = "This is a test query."
            query_result = embeddings.embed_query(text)

    """

    client: Any = None
    model: str = "text-embedding-ada-002"
    deployment: str = model
    openai_api_version: Optional[str] = None
    openai_api_base: Optional[str] = None
    openai_api_type: Optional[str] = None
    openai_proxy: Optional[str] = None
    embedding_ctx_length: int = 8191
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
"""The maximum number of tokens to embed at once.""" openai_api_key: Optional[str] = None openai_organization: Optional[str] = None allowed_special: Union[Literal["all"], Set[str]] = set() disallowed_special: Union[Literal["all"], Set[str], Sequence[str]] = "all" chunk_size: int = 1000 """Maximum number of texts to embed in each batch""" max_retries: int = 6 """Maximum number of retries to make when generating.""" request_timeout: Optional[Union[float, Tuple[float, float]]] = None """Timeout in seconds for the OpenAPI request.""" headers: Any = None tiktoken_model_name: Optional[str] = None """The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here.""" show_progress_bar: bool = False """Whether to show a progress bar when embedding.""" model_kwargs: Dict[str, Any] = Field(default_factory=dict) """Holds any model parameters valid for `create` call not explicitly specified.""" skip_empty: bool = False """Whether to skip empty strings when embedding or raise an error. Defaults to not skipping.""" class Config:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
"""Configuration for this pydantic object.""" extra = Extra.forbid @root_validator(pre=True) def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]: """Build extra kwargs from additional params that were passed in.""" all_required_field_names = get_pydantic_field_names(cls) extra = values.get("model_kwargs", {}) for field_name in list(values): if field_name in extra: raise ValueError(f"Found {field_name} supplied twice.") if field_name not in all_required_field_names: warnings.warn( f"""WARNING! {field_name} is not default parameter. {field_name} was transferred to model_kwargs. Please confirm that {field_name} is what you intended.""" ) extra[field_name] = values.pop(field_name) invalid_model_kwargs = all_required_field_names.intersection(extra.keys()) if invalid_model_kwargs: raise ValueError( f"Parameters {invalid_model_kwargs} should be specified explicitly. " f"Instead they were passed in as part of `model_kwargs` parameter." ) values["model_kwargs"] = extra return values @root_validator(pre=True) def validate_environment(cls, values: Dict) -> Dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
"""Validate that api key and python package exists in environment.""" values["openai_api_key"] = get_from_dict_or_env( values, "openai_api_key", "OPENAI_API_KEY" ) values["openai_api_base"] = get_from_dict_or_env( values, "openai_api_base", "OPENAI_API_BASE", default="", ) values["openai_api_type"] = get_from_dict_or_env( values, "openai_api_type", "OPENAI_API_TYPE", default="", ) values["openai_proxy"] = get_from_dict_or_env( values, "openai_proxy", "OPENAI_PROXY", default="", ) if values["openai_api_type"] in ("azure", "azure_ad", "azuread"): default_api_version = "2022-12-01"
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
            default_chunk_size = 16
        else:
            default_api_version = ""
            default_chunk_size = 1000
        values["openai_api_version"] = get_from_dict_or_env(
            values,
            "openai_api_version",
            "OPENAI_API_VERSION",
            default=default_api_version,
        )
        values["openai_organization"] = get_from_dict_or_env(
            values,
            "openai_organization",
            "OPENAI_ORGANIZATION",
            default="",
        )
        if "chunk_size" not in values:
            values["chunk_size"] = default_chunk_size
        try:
            import openai

            values["client"] = openai.Embedding
        except ImportError:
            raise ImportError(
                "Could not import openai python package. "
                "Please install it with `pip install openai`."
            )
        return values

    @property
    def _invocation_params(self) -> Dict:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
        openai_args = {
            "model": self.model,
            "request_timeout": self.request_timeout,
            "headers": self.headers,
            "api_key": self.openai_api_key,
            "organization": self.openai_organization,
            "api_base": self.openai_api_base,
            "api_type": self.openai_api_type,
            "api_version": self.openai_api_version,
            **self.model_kwargs,
        }
        if self.openai_api_type in ("azure", "azure_ad", "azuread"):
            openai_args["engine"] = self.deployment
        if self.openai_proxy:
            try:
                import openai
            except ImportError:
                raise ImportError(
                    "Could not import openai python package. "
                    "Please install it with `pip install openai`."
                )

            openai.proxy = {
                "http": self.openai_proxy,
                "https": self.openai_proxy,
            }
        return openai_args

    def _get_len_safe_embeddings(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
    self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
) -> List[List[float]]:
    embeddings: List[List[float]] = [[] for _ in range(len(texts))]
    try:
        import tiktoken
    except ImportError:
        raise ImportError(
            "Could not import tiktoken python package. "
            "This is needed in order to for OpenAIEmbeddings. "
            "Please install it with `pip install tiktoken`."
        )

    tokens = []
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
indices = []
model_name = self.tiktoken_model_name or self.model
try:
    encoding = tiktoken.encoding_for_model(model_name)
except KeyError:
    logger.warning("Warning: model not found. Using cl100k_base encoding.")
    model = "cl100k_base"
    encoding = tiktoken.get_encoding(model)
for i, text in enumerate(texts):
    if self.model.endswith("001"):
        text = text.replace("\n", " ")
    token = encoding.encode(
        text,
        allowed_special=self.allowed_special,
        disallowed_special=self.disallowed_special,
    )
    for j in range(0, len(token), self.embedding_ctx_length):
        tokens.append(token[j : j + self.embedding_ctx_length])
        indices.append(i)

batched_embeddings: List[List[float]] = []
_chunk_size = chunk_size or self.chunk_size

if self.show_progress_bar:
    try:
        from tqdm.auto import tqdm

        _iter = tqdm(range(0, len(tokens), _chunk_size))
    except ImportError:
        _iter = range(0, len(tokens), _chunk_size)
else:
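The chunk above implements the "length-safe" step: each text's token list is sliced into windows of at most `embedding_ctx_length`, with `indices` recording which input text every window came from. A self-contained sketch of just that slicing logic (the helper name is illustrative):

```python
def window_tokens(token_lists, ctx_length):
    """Slice each token list into windows of at most ctx_length tokens,
    remembering the index of the source text each window belongs to."""
    tokens, indices = [], []
    for i, token in enumerate(token_lists):
        for j in range(0, len(token), ctx_length):
            tokens.append(token[j : j + ctx_length])
            indices.append(i)
    return tokens, indices
```

A 5-token text with `ctx_length=2` yields three windows, all mapped back to source index 0.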
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
    _iter = range(0, len(tokens), _chunk_size)

for i in _iter:
    response = embed_with_retry(
        self,
        input=tokens[i : i + _chunk_size],
        **self._invocation_params,
    )
    batched_embeddings.extend(r["embedding"] for r in response["data"])

results: List[List[List[float]]] = [[] for _ in range(len(texts))]
num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]
for i in range(len(indices)):
    if self.skip_empty and len(batched_embeddings[i]) == 1:
        continue
    results[indices[i]].append(batched_embeddings[i])
    num_tokens_in_batch[indices[i]].append(len(tokens[i]))

for i in range(len(texts)):
    _result = results[i]
    if len(_result) == 0:
        average = embed_with_retry(
            self,
            input="",
            **self._invocation_params,
        )["data"][0]["embedding"]
    else:
        average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])
    embeddings[i] = (average / np.linalg.norm(average)).tolist()

return embeddings

async def _aget_len_safe_embeddings(
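In the recombination step above, the per-window embeddings belonging to one text are merged with `np.average`, weighted by how many tokens each window held, and then L2-normalized. A minimal sketch of that step in isolation (the function name is illustrative):

```python
import math

import numpy as np


def combine_chunk_embeddings(chunk_embeddings, num_tokens_per_chunk):
    """Token-count-weighted average of a text's chunk embeddings,
    normalized to unit length, mirroring the recombination above."""
    average = np.average(chunk_embeddings, axis=0, weights=num_tokens_per_chunk)
    return (average / np.linalg.norm(average)).tolist()
```

With two equally weighted orthogonal unit vectors, the result is their normalized midpoint.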
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
    self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
) -> List[List[float]]:
    embeddings: List[List[float]] = [[] for _ in range(len(texts))]
    try:
        import tiktoken
    except ImportError:
        raise ImportError(
            "Could not import tiktoken python package. "
            "This is needed in order to for OpenAIEmbeddings. "
            "Please install it with `pip install tiktoken`."
        )

    tokens = []
    indices = []
    model_name = self.tiktoken_model_name or self.model
    try:
        encoding = tiktoken.encoding_for_model(model_name)
    except KeyError:
        logger.warning("Warning: model not found. Using cl100k_base encoding.")
        model = "cl100k_base"
        encoding = tiktoken.get_encoding(model)
    for i, text in enumerate(texts):
        if self.model.endswith("001"):
            text = text.replace("\n", " ")
        token = encoding.encode(
            text,
            allowed_special=self.allowed_special,
            disallowed_special=self.disallowed_special,
        )
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
    for j in range(0, len(token), self.embedding_ctx_length):
        tokens.append(token[j : j + self.embedding_ctx_length])
        indices.append(i)

batched_embeddings: List[List[float]] = []
_chunk_size = chunk_size or self.chunk_size
for i in range(0, len(tokens), _chunk_size):
    response = await async_embed_with_retry(
        self,
        input=tokens[i : i + _chunk_size],
        **self._invocation_params,
    )
    batched_embeddings.extend(r["embedding"] for r in response["data"])

results: List[List[List[float]]] = [[] for _ in range(len(texts))]
num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]
for i in range(len(indices)):
    results[indices[i]].append(batched_embeddings[i])
    num_tokens_in_batch[indices[i]].append(len(tokens[i]))

for i in range(len(texts)):
    _result = results[i]
    if len(_result) == 0:
        average = (
            await async_embed_with_retry(
                self,
                input="",
                **self._invocation_params,
            )
        )["data"][0]["embedding"]
    else:
        average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])
    embeddings[i] = (average / np.linalg.norm(average)).tolist()
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
    return embeddings

def embed_documents(
    self, texts: List[str], chunk_size: Optional[int] = 0
) -> List[List[float]]:
    """Call out to OpenAI's embedding endpoint for embedding search docs.

    Args:
        texts: The list of texts to embed.
        chunk_size: The chunk size of embeddings. If None, will use the chunk
            size specified by the class.

    Returns:
        List of embeddings, one for each text.
    """
    return self._get_len_safe_embeddings(texts, engine=self.deployment)

async def aembed_documents(
    self, texts: List[str], chunk_size: Optional[int] = 0
) -> List[List[float]]:
    """Call out to OpenAI's embedding endpoint async for embedding search docs.

    Args:
        texts: The list of texts to embed.
        chunk_size: The chunk size of embeddings. If None, will use the chunk
            size specified by the class.

    Returns:
        List of embeddings, one for each text.
    """
    return await self._aget_len_safe_embeddings(texts, engine=self.deployment)

def embed_query(self, text: str) -> List[float]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/langchain/embeddings/openai.py
    """Call out to OpenAI's embedding endpoint for embedding query text.

    Args:
        text: The text to embed.

    Returns:
        Embedding for the text.
    """
    return self.embed_documents([text])[0]

async def aembed_query(self, text: str) -> List[float]:
    """Call out to OpenAI's embedding endpoint async for embedding query text.

    Args:
        text: The text to embed.

    Returns:
        Embedding for the text.
    """
    embeddings = await self.aembed_documents([text])
    return embeddings[0]
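The issue body quoted in these records traces the failure to openai-python 1.0.0 renaming `openai.Embedding` to `openai.embeddings` and dropping the `openai.error` module. One hedged sketch of bridging that rename — this selection helper is illustrative and is not the fix the linked PR applied:

```python
def pick_embedding_client(openai_module):
    """Return whichever embedding client attribute the installed
    openai module exposes (sketch; the attribute names are real,
    the selection helper is hypothetical)."""
    if hasattr(openai_module, "embeddings"):  # openai >= 1.0.0
        return openai_module.embeddings
    return openai_module.Embedding  # openai < 1.0.0
```

The same `hasattr` probe can be exercised against stand-in objects without either SDK version installed.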
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/tests/integration_tests/embeddings/test_openai.py
"""Test openai embeddings."""
import os

import numpy as np
import openai
import pytest

from langchain.embeddings.openai import OpenAIEmbeddings


@pytest.mark.scheduled
def test_openai_embedding_documents() -> None:
    """Test openai embeddings."""
    documents = ["foo bar"]
    embedding = OpenAIEmbeddings()
    output = embedding.embed_documents(documents)
    assert len(output) == 1
    assert len(output[0]) == 1536


@pytest.mark.scheduled
def test_openai_embedding_documents_multiple() -> None:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
12,943
OpenAIEmbeddings() does not work because of these bugs
### System Info Python Version: 3.11 LangChain Version: 0.0.331 OpenAI Version: 1.0.0 ### Who can help? @hwchase17, @agola11, @eyurtsev ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [X] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [X] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [ ] Chains - [ ] Callbacks/Tracing - [ ] Async ### Reproduction The following error has been caused due to the recent change in version of OpenAI to 1.0.0 **Use OpenAI==0.28.1 to fix this error** With the code: `embeddings = OpenAIEmbeddings()` The error produced is: `AttributeError: module 'openai' has no attribute 'Embedding'. Did you mean: 'embeddings'?` I went through the `langchain/embeddings/openai.py` file and then changed `value["client"] = openai.Embedding` to `value["client"] = openai.embeddings`, but then I receive this new error: `AttributeError: module 'openai' has no attribute 'error'` in the same file (`langchain/embeddings/openai.py`) ### Expected behavior There should be no error when calling this function.
https://github.com/langchain-ai/langchain/issues/12943
https://github.com/langchain-ai/langchain/pull/12969
fdbb45d79e69485e0892dadf48b32dc8efadde9b
0c81cd923e04bb68fdf3ad299946d7fa85a21f9f
"2023-11-06T17:56:29Z"
python
"2023-11-07T02:52:33Z"
libs/langchain/tests/integration_tests/embeddings/test_openai.py
    """Test openai embeddings."""
    documents = ["foo bar", "bar foo", "foo"]
    embedding = OpenAIEmbeddings(chunk_size=2)
    embedding.embedding_ctx_length = 8191
    output = embedding.embed_documents(documents)
    assert len(output) == 3
    assert len(output[0]) == 1536
    assert len(output[1]) == 1536
    assert len(output[2]) == 1536


@pytest.mark.scheduled
@pytest.mark.asyncio
async def test_openai_embedding_documents_async_multiple() -> None:
    """Test openai embeddings."""
    documents = ["foo bar", "bar foo", "foo"]
    embedding = OpenAIEmbeddings(chunk_size=2)
    embedding.embedding_ctx_length = 8191
    output = await embedding.aembed_documents(documents)
    assert len(output) == 3
    assert len(output[0]) == 1536
    assert len(output[1]) == 1536
    assert len(output[2]) == 1536


@pytest.mark.scheduled
def test_openai_embedding_query() -> None: