Dataset schema (one row per issue/changed-file chunk):

| column | type | details |
|---|---|---|
| status | string | 1 class |
| repo_name | string | 31 classes |
| repo_url | string | 31 classes |
| issue_id | int64 | 1 to 104k |
| title | string | lengths 4 to 233 |
| body | string | lengths 0 to 186k, nullable (⌀) |
| issue_url | string | lengths 38 to 56 |
| pull_url | string | lengths 37 to 54 |
| before_fix_sha | string | length 40 |
| after_fix_sha | string | length 40 |
| report_datetime | unknown | |
| language | string | 5 classes |
| commit_datetime | unknown | |
| updated_file | string | lengths 7 to 188 |
| chunk_content | string | lengths 1 to 1.03M |
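A hedged sketch of loading rows like the ones below with the `datasets` library; the dataset's Hub ID is not given in this excerpt, so `"user/issue-fix-chunks"` is a placeholder:

```python
from datasets import load_dataset

# Streaming avoids materializing chunk_content values of up to ~1 MB each.
ds = load_dataset("user/issue-fix-chunks", split="train", streaming=True)

for row in ds:
    print(row["repo_name"], row["issue_id"], row["updated_file"])
    break  # inspect a single row
```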
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,325 | Power BI Dataset Agent Issue |

### System Info
We are using the Power BI Agent guide below to try to connect to a Power BI dashboard.
[Power BI Dataset Agent](https://python.langchain.com/en/latest/modules/agents/toolkits/examples/powerbi.html)
We are able to connect to the OpenAI API but are facing issues with the following line of code.
`powerbi=PowerBIDataset(dataset_id="<dataset_id>", table_names=['table1', 'table2'], credential=DefaultAzureCredential())`
Error:
> ConfigError: field "credential" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs().
We tried searching for a solution but have had no luck so far. Is there any configuration we are missing? Can you share more details: is there any specific configuration or access required on the Power BI side?
thanks in advance...
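For readers hitting the same ConfigError: the message points at pydantic's forward-reference resolution, and the usual workaround for pydantic v1 models is sketched below. It assumes the unresolved annotation on `credential` is azure's `TokenCredential`; check the `powerbi.py` of the installed langchain version before relying on it.

```python
from azure.core.credentials import TokenCredential
from azure.identity import DefaultAzureCredential
from langchain.utilities.powerbi import PowerBIDataset

# Resolve the ForwardRef on the `credential` field before instantiating,
# as the error message itself suggests. TokenCredential is assumed to be
# the unresolved name; pass whatever name the annotation actually uses.
PowerBIDataset.update_forward_refs(TokenCredential=TokenCredential)

powerbi = PowerBIDataset(
    dataset_id="<dataset_id>",
    table_names=["table1", "table2"],
    credential=DefaultAzureCredential(),
)
```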
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Same steps as mentioned in the official Power BI Dataset Agent documentation.
### Expected behavior
We should be able to connect to Power BI.

| https://github.com/langchain-ai/langchain/issues/4325 | https://github.com/langchain-ai/langchain/pull/4983 | e68dfa70625b6bf7cfeb4c8da77f68069fb9cb95 | 06e524416c18543d5fd4dcbebb9cdf4b56c47db4 | "2023-05-08T07:57:11Z" | python | "2023-05-19T15:25:52Z" | langchain/utilities/powerbi.py |

from typing import Dict, List, Optional, Union  # imports restored for context

def json_to_md(  # signature line restored; the stored chunk begins mid-signature
    json_contents: List[Dict[str, Union[str, int, float]]],
    table_name: Optional[str] = None,
) -> str:
    """Converts a JSON object to a markdown table."""
    output_md = ""
    headers = json_contents[0].keys()
    for header in headers:
        # str.replace returns a new string; the stored chunk discarded the
        # result, so the cleaned header was never used. Assign it back.
        header = header.replace("[", ".").replace("]", "")
        if table_name:
            header = header.replace(f"{table_name}.", "")
        output_md += f"| {header} "
    output_md += "|\n"
    for row in json_contents:
        for value in row.values():
            output_md += f"| {value} "
        output_md += "|\n"
    return output_md
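A quick illustration of what the helper yields once the fix above is applied; the bracketed column names mimic Power BI's `table[column]` style and are invented for the example:

```python
rows = [
    {"sales[region]": "EMEA", "sales[total]": 1200},
    {"sales[region]": "APAC", "sales[total]": 950},
]

# Headers become "region" and "total": brackets are rewritten to dots and
# the "sales." prefix is stripped because table_name="sales".
print(json_to_md(rows, table_name="sales"))
```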
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,896 | Redis Vectorstore: Redis.from_texts_return_keys() got multiple values for argument 'cls' |

### System Info
```
Python 3.10.4
langchain==0.0.171
redis==3.5.3
redisearch==2.1.1
```
### Who can help?
@tylerhutcherson
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I was able to work around issue #3893 by temporarily disabling `_check_redis_module_exist`, after which I get the error below when `from_texts_return_keys` is called from the `from_documents` method of the Redis class. It seems the explicit `cls` argument is not needed in the `from_texts_return_keys` method, since it is already defined as a classmethod.
```
File "/workspaces/chatdataset_backend/adapters.py", line 96, in load
vectorstore = self.rds.from_documents(documents=documents, embedding=self.embeddings)
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 296, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/vectorstores/redis.py", line 448, in from_texts
instance, _ = cls.from_texts_return_keys(
TypeError: Redis.from_texts_return_keys() got multiple values for argument 'cls'
```
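As an aside, this TypeError is the generic Python symptom of a classmethod receiving `cls` twice; a minimal standalone reproduction, unrelated to langchain's code:

```python
class Example:
    @classmethod
    def build(cls, value: int) -> "Example":
        return cls()

# `cls` is already bound when calling through the class, so passing it
# again as a keyword duplicates the argument:
Example.build(cls=Example, value=1)
# TypeError: build() got multiple values for argument 'cls'
```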
### Expected behavior
The explicit `cls` argument should be removed from the `Redis` class methods wherever it is passed redundantly. I was able to solve the issue with that fix.

| https://github.com/langchain-ai/langchain/issues/4896 | https://github.com/langchain-ai/langchain/pull/4932 | a87a2524c7f8f55846712a682ffc80b5fc224b73 | 616e9a93e08f4f042c492b89545e85e80592ffbe | "2023-05-18T02:46:53Z" | python | "2023-05-19T20:02:03Z" | langchain/vectorstores/redis.py |

"""Wrapper around Redis vector database."""
from __future__ import annotations
import json
import logging
import uuid
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Iterable,
List,
Literal,
Mapping,
Optional,
Tuple,
Type,
)
import numpy as np
from pydantic import BaseModel, root_validator
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore, VectorStoreRetriever
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from redis.client import Redis as RedisType
from redis.commands.search.query import Query
REDIS_REQUIRED_MODULES = [
{"name": "search", "ver": 20400},
{"name": "searchlight", "ver": 20400},
]
REDIS_DISTANCE_METRICS = Literal["COSINE", "IP", "L2"]
def _check_redis_module_exist(client: RedisType, required_modules: List[dict]) -> None:
"""Check if the correct Redis modules are installed."""
installed_modules = client.module_list()
installed_modules = {
module[b"name"].decode("utf-8"): module for module in installed_modules
}
for module in required_modules:
if module["name"] in installed_modules and int(
installed_modules[module["name"]][b"ver"]
) >= int(module["ver"]):
return
error_message = (
"You must add the RediSearch (>= 2.4) module from Redis Stack. "
"Please refer to Redis Stack docs: https://redis.io/docs/stack/"
)
logging.error(error_message)
raise ValueError(error_message)
def _check_index_exists(client: RedisType, index_name: str) -> bool:
    """Check if Redis index exists."""
try:
client.ft(index_name).info()
except:
logger.info("Index does not exist")
return False
logger.info("Index already exists")
return True
def _redis_key(prefix: str) -> str:
"""Redis key schema for a given prefix."""
return f"{prefix}:{uuid.uuid4().hex}"
def _redis_prefix(index_name: str) -> str:
"""Redis key prefix for a given index."""
return f"doc:{index_name}"
def _default_relevance_score(val: float) -> float:
return 1 - val
class Redis(VectorStore):
    """Wrapper around Redis vector database.
To use, you should have the ``redis`` python package installed.
Example:
.. code-block:: python
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Redis(
redis_url="redis://username:password@localhost:6379"
index_name="my-index",
embedding_function=embeddings.embed_query,
)
"""
def __init__(
self,
redis_url: str,
index_name: str,
embedding_function: Callable,
content_key: str = "content",
metadata_key: str = "metadata",
vector_key: str = "content_vector",
        relevance_score_fn: Optional[
            Callable[[float], float]
] = _default_relevance_score,
**kwargs: Any,
):
"""Initialize with necessary components."""
try:
import redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
self.embedding_function = embedding_function
self.index_name = index_name
try:
redis_client = redis.from_url(redis_url, **kwargs)
_check_redis_module_exist(redis_client, REDIS_REQUIRED_MODULES)
except ValueError as e:
raise ValueError(f"Redis failed to connect: {e}")
self.client = redis_client
self.content_key = content_key
self.metadata_key = metadata_key
self.vector_key = vector_key
self.relevance_score_fn = relevance_score_fn
def _create_index(
self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = "COSINE"
) -> None:
        try:
            from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
if not _check_index_exists(self.client, self.index_name):
schema = (
TextField(name=self.content_key),
TextField(name=self.metadata_key),
VectorField(
self.vector_key,
"FLAT",
{
"TYPE": "FLOAT32",
"DIM": dim,
"DISTANCE_METRIC": distance_metric,
},
),
)
prefix = _redis_prefix(self.index_name)
self.client.ft(self.index_name).create_index(
fields=schema,
definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH),
)
    def add_texts(
        self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
embeddings: Optional[List[List[float]]] = None,
keys: Optional[List[str]] = None,
batch_size: int = 1000,
**kwargs: Any,
) -> List[str]:
"""Add more texts to the vectorstore.
Args:
texts (Iterable[str]): Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional): Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional): Optional pre-generated
embeddings. Defaults to None.
keys (Optional[List[str]], optional): Optional key values to use as ids.
Defaults to None.
batch_size (int, optional): Batch size to use for writes. Defaults to 1000.
Returns:
List[str]: List of ids added to the vectorstore
"""
ids = []
prefix = _redis_prefix(self.index_name)
pipeline = self.client.pipeline(transaction=False)
for i, text in enumerate(texts):
key = keys[i] if keys else _redis_key(prefix)
            metadata = metadatas[i] if metadatas else {}
            embedding = embeddings[i] if embeddings else self.embedding_function(text)
pipeline.hset(
key,
mapping={
self.content_key: text,
self.vector_key: np.array(embedding, dtype=np.float32).tobytes(),
self.metadata_key: json.dumps(metadata),
},
)
ids.append(key)
if i % batch_size == 0:
pipeline.execute()
pipeline.execute()
return ids
def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""
Returns the most similar indexed documents to the query text.
Args:
query (str): The query text for which to find similar documents.
k (int): The number of documents to return. Default is 4.
Returns:
List[Document]: A list of documents that are most similar to the query text.
"""
docs_and_scores = self.similarity_search_with_score(query, k=k)
return [doc for doc, _ in docs_and_scores]
    def similarity_search_limit_score(
        self, query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any
) -> List[Document]:
"""
Returns the most similar indexed documents to the query text within the
score_threshold range.
Args:
query (str): The query text for which to find similar documents.
k (int): The number of documents to return. Default is 4.
score_threshold (float): The minimum matching score required for a document
to be considered a match. Defaults to 0.2.
Because the similarity calculation algorithm is based on cosine similarity,
the smaller the angle, the higher the similarity.
Returns:
List[Document]: A list of documents that are most similar to the query text,
including the match score for each document.
Note:
If there are no documents that satisfy the score_threshold value,
an empty list is returned.
"""
docs_and_scores = self.similarity_search_with_score(query, k=k)
return [doc for doc, score in docs_and_scores if score < score_threshold]
    def _prepare_query(self, k: int) -> Query:
        try:
from redis.commands.search.query import Query
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
hybrid_fields = "*"
base_query = (
f"{hybrid_fields}=>[KNN {k} @{self.vector_key} $vector AS vector_score]"
)
return_fields = [self.metadata_key, self.content_key, "vector_score"]
return (
Query(base_query)
.return_fields(*return_fields)
.sort_by("vector_score")
.paging(0, k)
.dialect(2)
)
    def similarity_search_with_score(
        self, query: str, k: int = 4
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query and score for each
"""
embedding = self.embedding_function(query)
redis_query = self._prepare_query(k)
params_dict: Mapping[str, str] = {
"vector": np.array(embedding)
.astype(dtype=np.float32)
.tobytes()
        }
        results = self.client.ft(self.index_name).search(redis_query, params_dict)
docs = [
(
Document(
page_content=result.content, metadata=json.loads(result.metadata)
),
float(result.vector_score),
)
for result in results.docs
]
return docs
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
if self.relevance_score_fn is None:
raise ValueError(
"relevance_score_fn must be provided to"
" Redis constructor to normalize scores"
)
docs_and_scores = self.similarity_search_with_score(query, k=k)
return [(doc, self.relevance_score_fn(score)) for doc, score in docs_and_scores]
@classmethod
    def from_texts_return_keys(
        cls: Type[Redis],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
index_name: Optional[str] = None,
content_key: str = "content",
metadata_key: str = "metadata",
vector_key: str = "content_vector",
distance_metric: REDIS_DISTANCE_METRICS = "COSINE",
**kwargs: Any,
) -> Tuple[Redis, List[str]]:
"""Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates a new index for the embeddings in Redis.
3. Adds the documents to the newly created Redis index.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
            redisearch = RediSearch.from_texts(
                texts,
embeddings,
redis_url="redis://username:password@localhost:6379"
)
"""
redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
if "redis_url" in kwargs:
kwargs.pop("redis_url")
if not index_name:
index_name = uuid.uuid4().hex
instance = cls(
redis_url,
index_name,
embedding.embed_query,
content_key=content_key,
metadata_key=metadata_key,
vector_key=vector_key,
**kwargs,
)
embeddings = embedding.embed_documents(texts)
instance._create_index(dim=len(embeddings[0]), distance_metric=distance_metric)
keys = instance.add_texts(texts, metadatas, embeddings)
return instance, keys
@classmethod
    def from_texts(
        cls: Type[Redis],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
index_name: Optional[str] = None,
content_key: str = "content",
metadata_key: str = "metadata",
vector_key: str = "content_vector",
**kwargs: Any,
    ) -> Redis:
        """Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates a new index for the embeddings in Redis.
3. Adds the documents to the newly created Redis index.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
redisearch = RediSearch.from_texts(
texts,
embeddings,
redis_url="redis://username:password@localhost:6379"
)
"""
instance, _ = cls.from_texts_return_keys(
texts,
embedding,
metadatas=metadatas,
index_name=index_name,
content_key=content_key,
metadata_key=metadata_key,
vector_key=vector_key,
**kwargs,
)
return instance
@staticmethod
    def drop_index(
        index_name: str,
delete_documents: bool,
**kwargs: Any,
) -> bool:
"""
Drop a Redis search index.
Args:
index_name (str): Name of the index to drop.
delete_documents (bool): Whether to drop the associated documents.
Returns:
bool: Whether or not the drop was successful.
"""
redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
try:
import redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
try:
if "redis_url" in kwargs:
kwargs.pop("redis_url")
client = redis.from_url(url=redis_url, **kwargs)
except ValueError as e:
raise ValueError(f"Your redis connected error: {e}")
        try:
            client.ft(index_name).dropindex(delete_documents)
logger.info("Drop index")
return True
except:
return False
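    # Hypothetical usage (index name and URL are placeholders):
    #   Redis.drop_index(index_name="my-index", delete_documents=True,
    #                    redis_url="redis://localhost:6379")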
@classmethod
def from_existing_index(
cls,
embedding: Embeddings,
index_name: str,
content_key: str = "content",
metadata_key: str = "metadata",
vector_key: str = "content_vector",
**kwargs: Any,
) -> Redis:
"""Connect to an existing Redis index."""
redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
try:
import redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
try:
if "redis_url" in kwargs:
                kwargs.pop("redis_url")
            client = redis.from_url(url=redis_url, **kwargs)
_check_redis_module_exist(client, REDIS_REQUIRED_MODULES)
assert _check_index_exists(
client, index_name
), f"Index {index_name} does not exist"
except Exception as e:
raise ValueError(f"Redis failed to connect: {e}")
return cls(
redis_url,
index_name,
embedding.embed_query,
content_key=content_key,
metadata_key=metadata_key,
vector_key=vector_key,
**kwargs,
)
def as_retriever(self, **kwargs: Any) -> RedisVectorStoreRetriever:
return RedisVectorStoreRetriever(vectorstore=self, **kwargs)
class RedisVectorStoreRetriever(VectorStoreRetriever, BaseModel):
vectorstore: Redis
search_type: str = "similarity"
k: int = 4
score_threshold: float = 0.4
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
@root_validator()
    def validate_search_type(cls, values: Dict) -> Dict:
        """Validate search type."""
if "search_type" in values:
search_type = values["search_type"]
if search_type not in ("similarity", "similarity_limit"):
raise ValueError(f"search_type of {search_type} not allowed.")
return values
def get_relevant_documents(self, query: str) -> List[Document]:
if self.search_type == "similarity":
docs = self.vectorstore.similarity_search(query, k=self.k)
elif self.search_type == "similarity_limit":
docs = self.vectorstore.similarity_search_limit_score(
query, k=self.k, score_threshold=self.score_threshold
)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
return docs
async def aget_relevant_documents(self, query: str) -> List[Document]:
raise NotImplementedError("RedisVectorStoreRetriever does not support async")
def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
"""Add documents to vectorstore."""
return self.vectorstore.add_documents(documents, **kwargs)
async def aadd_documents(
self, documents: List[Document], **kwargs: Any
) -> List[str]:
"""Add documents to vectorstore."""
return await self.vectorstore.aadd_documents(documents, **kwargs)
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,893 | AttributeError: 'Redis' object has no attribute 'module_list' | I'm trying to make a vectorstore using redis and store the embeddings in redis.
When I write the code
rds = Redis.from_documents(docs, embeddings, redis_url="redis://localhost:6379", index_name='test_link')
I get the following error
AttributeError: 'Redis' object has no attribute 'module_list'.
Note: I'm trying to run Redis locally on Ubuntu under the Windows Subsystem for Linux.
Please help. | https://github.com/langchain-ai/langchain/issues/3893 | https://github.com/langchain-ai/langchain/pull/4932 | a87a2524c7f8f55846712a682ffc80b5fc224b73 | 616e9a93e08f4f042c492b89545e85e80592ffbe | "2023-05-01T17:02:43Z" | python | "2023-05-19T20:02:03Z" | langchain/vectorstores/redis.py | """Wrapper around Redis vector database."""
from __future__ import annotations
import json
import logging
import uuid
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Iterable,
List,
Literal,
Mapping,
Optional,
Tuple,
Type,
)
import numpy as np
from pydantic import BaseModel, root_validator
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore, VectorStoreRetriever
logger = logging.getLogger(__name__)
if TYPE_CHECKING:
from redis.client import Redis as RedisType
from redis.commands.search.query import Query
REDIS_REQUIRED_MODULES = [
{"name": "search", "ver": 20400},
{"name": "searchlight", "ver": 20400},
]
REDIS_DISTANCE_METRICS = Literal["COSINE", "IP", "L2"]
def _check_redis_module_exist(client: RedisType, required_modules: List[dict]) -> None:
"""Check if the correct Redis modules are installed."""
installed_modules = client.module_list()
installed_modules = {
module[b"name"].decode("utf-8"): module for module in installed_modules
}
for module in required_modules:
if module["name"] in installed_modules and int(
installed_modules[module["name"]][b"ver"]
) >= int(module["ver"]):
return
error_message = (
"You must add the RediSearch (>= 2.4) module from Redis Stack. "
"Please refer to Redis Stack docs: https://redis.io/docs/stack/"
)
logging.error(error_message)
raise ValueError(error_message)
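# Usage sketch of the guard above (the URL is an assumption):
#   import redis
#   client = redis.from_url("redis://localhost:6379")
#   _check_redis_module_exist(client, REDIS_REQUIRED_MODULES)
# This raises ValueError when the server lacks RediSearch >= 2.4. The
# "'Redis' object has no attribute 'module_list'" report above likely points
# at an older redis-py client (< 4.x) that does not expose `module_list()`.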
def _check_index_exists(client: RedisType, index_name: str) -> bool:
"""Check if Redis index exists."""
try:
client.ft(index_name).info()
except:
logger.info("Index does not exist")
return False
logger.info("Index already exists")
return True
def _redis_key(prefix: str) -> str:
"""Redis key schema for a given prefix."""
return f"{prefix}:{uuid.uuid4().hex}"
def _redis_prefix(index_name: str) -> str:
"""Redis key prefix for a given index."""
return f"doc:{index_name}"
def _default_relevance_score(val: float) -> float:
return 1 - val
class Redis(VectorStore):
"""Wrapper around Redis vector database.
To use, you should have the ``redis`` python package installed.
Example:
.. code-block:: python
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Redis(
redis_url="redis://username:password@localhost:6379"
index_name="my-index",
embedding_function=embeddings.embed_query,
)
"""
def __init__(
self,
redis_url: str,
index_name: str,
embedding_function: Callable,
content_key: str = "content",
metadata_key: str = "metadata",
vector_key: str = "content_vector",
relevance_score_fn: Optional[
Callable[[float], float]
] = _default_relevance_score,
**kwargs: Any,
):
"""Initialize with necessary components."""
try:
import redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
self.embedding_function = embedding_function
self.index_name = index_name
try:
redis_client = redis.from_url(redis_url, **kwargs)
_check_redis_module_exist(redis_client, REDIS_REQUIRED_MODULES)
except ValueError as e:
raise ValueError(f"Redis failed to connect: {e}")
self.client = redis_client
self.content_key = content_key
self.metadata_key = metadata_key
self.vector_key = vector_key
self.relevance_score_fn = relevance_score_fn
def _create_index(
self, dim: int = 1536, distance_metric: REDIS_DISTANCE_METRICS = "COSINE"
) -> None:
try:
from redis.commands.search.field import TextField, VectorField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
if not _check_index_exists(self.client, self.index_name):
schema = (
TextField(name=self.content_key),
TextField(name=self.metadata_key),
VectorField(
self.vector_key,
"FLAT",
{
"TYPE": "FLOAT32",
"DIM": dim,
"DISTANCE_METRIC": distance_metric,
},
),
)
prefix = _redis_prefix(self.index_name)
self.client.ft(self.index_name).create_index(
fields=schema,
definition=IndexDefinition(prefix=[prefix], index_type=IndexType.HASH),
)
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
embeddings: Optional[List[List[float]]] = None,
keys: Optional[List[str]] = None,
batch_size: int = 1000,
**kwargs: Any,
) -> List[str]:
"""Add more texts to the vectorstore.
Args:
texts (Iterable[str]): Iterable of strings/text to add to the vectorstore.
metadatas (Optional[List[dict]], optional): Optional list of metadatas.
Defaults to None.
embeddings (Optional[List[List[float]]], optional): Optional pre-generated
embeddings. Defaults to None.
keys (Optional[List[str]], optional): Optional key values to use as ids.
Defaults to None.
batch_size (int, optional): Batch size to use for writes. Defaults to 1000.
Returns:
List[str]: List of ids added to the vectorstore
"""
ids = []
prefix = _redis_prefix(self.index_name)
pipeline = self.client.pipeline(transaction=False)
for i, text in enumerate(texts):
key = keys[i] if keys else _redis_key(prefix)
metadata = metadatas[i] if metadatas else {}
embedding = embeddings[i] if embeddings else self.embedding_function(text)
pipeline.hset(
key,
mapping={
self.content_key: text,
self.vector_key: np.array(embedding, dtype=np.float32).tobytes(),
self.metadata_key: json.dumps(metadata),
},
)
ids.append(key)
if i % batch_size == 0:
pipeline.execute()
pipeline.execute()
return ids
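# Usage sketch (texts and metadata are placeholders):
#   ids = rds.add_texts(
#       ["doc one", "doc two"],
#       metadatas=[{"source": "a"}, {"source": "b"}],
#   )
# Each returned id is a "doc:<index_name>:<uuid>" key written via the pipeline.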
def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""
Returns the most similar indexed documents to the query text.
Args:
query (str): The query text for which to find similar documents.
k (int): The number of documents to return. Default is 4.
Returns:
List[Document]: A list of documents that are most similar to the query text.
"""
docs_and_scores = self.similarity_search_with_score(query, k=k)
return [doc for doc, _ in docs_and_scores]
def similarity_search_limit_score(
self, query: str, k: int = 4, score_threshold: float = 0.2, **kwargs: Any
) -> List[Document]:
"""
Returns the most similar indexed documents to the query text within the
score_threshold range.
Args:
query (str): The query text for which to find similar documents.
k (int): The number of documents to return. Default is 4.
score_threshold (float): The minimum matching score required for a document
to be considered a match. Defaults to 0.2.
Because the underlying vector field uses cosine distance, a smaller
score means a closer match.
Returns:
List[Document]: A list of documents that are most similar to the query text,
including the match score for each document.
Note:
If there are no documents that satisfy the score_threshold value,
an empty list is returned.
"""
docs_and_scores = self.similarity_search_with_score(query, k=k)
return [doc for doc, score in docs_and_scores if score < score_threshold]
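# Usage sketch (query and threshold are placeholders):
#   docs = rds.similarity_search_limit_score("fox jumped", k=4, score_threshold=0.2)
# Only hits whose cosine distance is below 0.2 are returned; an empty list
# means nothing cleared the threshold.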
def _prepare_query(self, k: int) -> Query:
try:
from redis.commands.search.query import Query
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
hybrid_fields = "*"
base_query = (
f"{hybrid_fields}=>[KNN {k} @{self.vector_key} $vector AS vector_score]"
)
return_fields = [self.metadata_key, self.content_key, "vector_score"]
return (
Query(base_query)
.return_fields(*return_fields)
.sort_by("vector_score")
.paging(0, k)
.dialect(2)
)
def similarity_search_with_score(
self, query: str, k: int = 4
) -> List[Tuple[Document, float]]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query and score for each
"""
embedding = self.embedding_function(query)
redis_query = self._prepare_query(k)
params_dict: Mapping[str, str] = {
"vector": np.array(embedding)
.astype(dtype=np.float32)
.tobytes()
}
results = self.client.ft(self.index_name).search(redis_query, params_dict)
docs = [
(
Document(
page_content=result.content, metadata=json.loads(result.metadata)
),
float(result.vector_score),
)
for result in results.docs
]
return docs
def _similarity_search_with_relevance_scores(
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
if self.relevance_score_fn is None:
raise ValueError(
"relevance_score_fn must be provided to"
" Redis constructor to normalize scores"
)
docs_and_scores = self.similarity_search_with_score(query, k=k)
return [(doc, self.relevance_score_fn(score)) for doc, score in docs_and_scores]
@classmethod
def from_texts_return_keys(
cls: Type[Redis],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
index_name: Optional[str] = None,
content_key: str = "content",
metadata_key: str = "metadata",
vector_key: str = "content_vector",
distance_metric: REDIS_DISTANCE_METRICS = "COSINE",
**kwargs: Any,
) -> Tuple[Redis, List[str]]:
"""Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates a new index for the embeddings in Redis.
3. Adds the documents to the newly created Redis index.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
redisearch = Redis.from_texts(
texts,
embeddings,
redis_url="redis://username:password@localhost:6379"
)
"""
redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
if "redis_url" in kwargs:
kwargs.pop("redis_url")
if not index_name:
index_name = uuid.uuid4().hex
instance = cls(
redis_url,
index_name,
embedding.embed_query,
content_key=content_key,
metadata_key=metadata_key,
vector_key=vector_key,
**kwargs,
)
embeddings = embedding.embed_documents(texts)
instance._create_index(dim=len(embeddings[0]), distance_metric=distance_metric)
keys = instance.add_texts(texts, metadatas, embeddings)
return instance, keys
@classmethod
def from_texts(
cls: Type[Redis],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None,
index_name: Optional[str] = None,
content_key: str = "content",
metadata_key: str = "metadata",
vector_key: str = "content_vector",
**kwargs: Any,
) -> Redis:
"""Create a Redis vectorstore from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates a new index for the embeddings in Redis.
3. Adds the documents to the newly created Redis index.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain.vectorstores import Redis
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
redisearch = Redis.from_texts(
texts,
embeddings,
redis_url="redis://username:password@localhost:6379"
)
"""
instance, _ = cls.from_texts_return_keys(
texts,
embedding,
metadatas=metadatas,
index_name=index_name,
content_key=content_key,
metadata_key=metadata_key,
vector_key=vector_key,
**kwargs,
)
return instance
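# Example invocation (URL, texts and embeddings are placeholders):
#   from langchain.embeddings import OpenAIEmbeddings
#   rds = Redis.from_texts(
#       ["hello", "world"],
#       OpenAIEmbeddings(),
#       redis_url="redis://localhost:6379",
#       index_name="demo",
#   )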
@staticmethod
def drop_index(
index_name: str,
delete_documents: bool,
**kwargs: Any,
) -> bool:
"""
Drop a Redis search index.
Args:
index_name (str): Name of the index to drop.
delete_documents (bool): Whether to drop the associated documents.
Returns:
bool: Whether or not the drop was successful.
"""
redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
try:
import redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
try:
if "redis_url" in kwargs:
kwargs.pop("redis_url")
client = redis.from_url(url=redis_url, **kwargs)
except ValueError as e:
raise ValueError(f"Your redis connected error: {e}")
try:
client.ft(index_name).dropindex(delete_documents)
logger.info("Drop index")
return True
except:
return False
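# Usage sketch (index name and URL are assumptions):
#   Redis.drop_index(
#       index_name="demo",
#       delete_documents=True,
#       redis_url="redis://localhost:6379",
#   )
# Returns False rather than raising if the index is already gone.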
@classmethod
def from_existing_index(
cls,
embedding: Embeddings,
index_name: str,
content_key: str = "content",
metadata_key: str = "metadata",
vector_key: str = "content_vector",
**kwargs: Any,
) -> Redis:
"""Connect to an existing Redis index."""
redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
try:
import redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
try:
if "redis_url" in kwargs:
kwargs.pop("redis_url") |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,893 | AttributeError: 'Redis' object has no attribute 'module_list' | I'm trying to make a vectorstore using redis and store the embeddings in redis.
When I write the code
rds = Redis.from_documents(docs, embeddings, redis_url="redis://localhost:6379", index_name='test_link')
I get the following error
AttributeError: 'Redis' object has no attribute 'module_list'.
Note: I'm trying to run redis locally on windows subsystem ubuntu.
Please help. | https://github.com/langchain-ai/langchain/issues/3893 | https://github.com/langchain-ai/langchain/pull/4932 | a87a2524c7f8f55846712a682ffc80b5fc224b73 | 616e9a93e08f4f042c492b89545e85e80592ffbe | "2023-05-01T17:02:43Z" | python | "2023-05-19T20:02:03Z" | langchain/vectorstores/redis.py | client = redis.from_url(url=redis_url, **kwargs)
_check_redis_module_exist(client, REDIS_REQUIRED_MODULES)
assert _check_index_exists(
client, index_name
), f"Index {index_name} does not exist"
except Exception as e:
raise ValueError(f"Redis failed to connect: {e}")
return cls(
redis_url,
index_name,
embedding.embed_query,
content_key=content_key,
metadata_key=metadata_key,
vector_key=vector_key,
**kwargs,
)
def as_retriever(self, **kwargs: Any) -> RedisVectorStoreRetriever:
return RedisVectorStoreRetriever(vectorstore=self, **kwargs)
class RedisVectorStoreRetriever(VectorStoreRetriever, BaseModel):
vectorstore: Redis
search_type: str = "similarity"
k: int = 4
score_threshold: float = 0.4
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
@root_validator()
def validate_search_type(cls, values: Dict) -> Dict:
"""Validate search type."""
if "search_type" in values:
search_type = values["search_type"]
if search_type not in ("similarity", "similarity_limit"):
raise ValueError(f"search_type of {search_type} not allowed.")
return values
def get_relevant_documents(self, query: str) -> List[Document]:
if self.search_type == "similarity":
docs = self.vectorstore.similarity_search(query, k=self.k)
elif self.search_type == "similarity_limit":
docs = self.vectorstore.similarity_search_limit_score(
query, k=self.k, score_threshold=self.score_threshold
)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
return docs
async def aget_relevant_documents(self, query: str) -> List[Document]:
raise NotImplementedError("RedisVectorStoreRetriever does not support async")
def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
"""Add documents to vectorstore."""
return self.vectorstore.add_documents(documents, **kwargs)
async def aadd_documents(
self, documents: List[Document], **kwargs: Any
) -> List[str]:
"""Add documents to vectorstore."""
return await self.vectorstore.aadd_documents(documents, **kwargs)
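# Retriever usage sketch (search parameters are placeholders):
#   retriever = rds.as_retriever(search_type="similarity_limit", k=4)
#   docs = retriever.get_relevant_documents("what did the fox do?")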
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,825 | TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType' | ### System Info
langchain version 0.0.171
python version 3.9.13
macOS
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is a problem with the generative agents.
To reproduce, please follow the tutorial outlined here:
https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html
When you get to the following line of code you will get an error:
`print(tommie.get_summary(force_refresh=True))`
```
File ~/.pyenv/versions/3.9.13/lib/python3.9/site-packages/langchain/retrievers/time_weighted_retriever.py:14, in _get_hours_passed(time, ref_time)
12 def _get_hours_passed(time: datetime.datetime, ref_time: datetime.datetime) -> float:
13 """Get the hours passed between two datetime objects."""
---> 14 return (time - ref_time).total_seconds() / 3600
TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType'
```
### Expected behavior
The ref time should be a datetime and tommies summary should be printed. | https://github.com/langchain-ai/langchain/issues/4825 | https://github.com/langchain-ai/langchain/pull/5045 | c28cc0f1ac5a1ddd6a9dbb7d6792bb0f4ab0087d | e173e032bcceae3a7d3bb400c34d554f04be14ca | "2023-05-17T02:24:24Z" | python | "2023-05-22T22:47:03Z" | langchain/retrievers/time_weighted_retriever.py | """Retriever that combines embedding similarity with recency in retrieving values."""
import datetime
from copy import deepcopy
from typing import Any, Dict, List, Optional, Tuple
from pydantic import BaseModel, Field
from langchain.schema import BaseRetriever, Document
from langchain.vectorstores.base import VectorStore
def _get_hours_passed(time: datetime.datetime, ref_time: datetime.datetime) -> float:
"""Get the hours passed between two datetime objects."""
return (time - ref_time).total_seconds() / 3600
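# The TypeError reported above surfaces here when a stored document's
# "last_accessed_at" metadata is missing, e.g.:
#   _get_hours_passed(datetime.datetime.now(), None)
#   -> TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType'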
class TimeWeightedVectorStoreRetriever(BaseRetriever, BaseModel):
"""Retriever combining embedding similarity with recency."""
vectorstore: VectorStore
"""The vectorstore to store documents and determine salience."""
search_kwargs: dict = Field(default_factory=lambda: dict(k=100))
"""Keyword arguments to pass to the vectorstore similarity search."""
memory_stream: List[Document] = Field(default_factory=list)
"""The memory_stream of documents to search through."""
decay_rate: float = Field(default=0.01)
"""The exponential decay factor used as (1.0-decay_rate)**(hrs_passed)."""
k: int = 4
"""The maximum number of documents to retrieve in a given call."""
other_score_keys: List[str] = []
"""Other keys in the metadata to factor into the score, e.g. 'importance'."""
default_salience: Optional[float] = None
"""The salience to assign memories not retrieved from the vector store.
None assigns no salience to documents not fetched from the vector store.
"""
class Config:
"""Configuration for this pydantic object."""
arbitrary_types_allowed = True
def _get_combined_score(
self,
document: Document,
vector_relevance: Optional[float],
current_time: datetime.datetime,
) -> float:
"""Return the combined score for a document."""
hours_passed = _get_hours_passed(
current_time,
document.metadata["last_accessed_at"],
)
score = (1.0 - self.decay_rate) ** hours_passed
for key in self.other_score_keys:
if key in document.metadata:
score += document.metadata[key]
if vector_relevance is not None:
score += vector_relevance
return score
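# Worked example of the recency term (decay_rate 0.01 is the default):
# a memory last accessed 24 hours ago contributes 0.99 ** 24 ≈ 0.786 to the
# score before any `other_score_keys` or vector relevance are added.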
def get_salient_docs(self, query: str) -> Dict[int, Tuple[Document, float]]:
"""Return documents that are salient to the query."""
docs_and_scores: List[Tuple[Document, float]]
docs_and_scores = self.vectorstore.similarity_search_with_relevance_scores(
query, **self.search_kwargs
)
results = {}
for fetched_doc, relevance in docs_and_scores:
if "buffer_idx" in fetched_doc.metadata: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,825 | TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType' | ### System Info
langchain version 0.0.171
python version 3.9.13
macos
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is a problem with the generative agents.
To reproduce please follow the tutorial outlines here:
https://python.langchain.com/en/latest/use_cases/agent_simulations/characters.html
When you get to the following line of code you will get an error:
`print(tommie.get_summary(force_refresh=True))`
```
File ~/.pyenv/versions/3.9.13/lib/python3.9/site-packages/langchain/retrievers/time_weighted_retriever.py:14, in _get_hours_passed(time, ref_time)
12 def _get_hours_passed(time: datetime.datetime, ref_time: datetime.datetime) -> float:
13 """Get the hours passed between two datetime objects."""
---> 14 return (time - ref_time).total_seconds() / 3600
TypeError: unsupported operand type(s) for -: 'datetime.datetime' and 'NoneType'
```
### Expected behavior
The ref time should be a datetime and tommies summary should be printed. | https://github.com/langchain-ai/langchain/issues/4825 | https://github.com/langchain-ai/langchain/pull/5045 | c28cc0f1ac5a1ddd6a9dbb7d6792bb0f4ab0087d | e173e032bcceae3a7d3bb400c34d554f04be14ca | "2023-05-17T02:24:24Z" | python | "2023-05-22T22:47:03Z" | langchain/retrievers/time_weighted_retriever.py | buffer_idx = fetched_doc.metadata["buffer_idx"]
doc = self.memory_stream[buffer_idx]
results[buffer_idx] = (doc, relevance)
return results
def get_relevant_documents(self, query: str) -> List[Document]:
"""Return documents that are relevant to the query."""
current_time = datetime.datetime.now()
docs_and_scores = {
doc.metadata["buffer_idx"]: (doc, self.default_salience)
for doc in self.memory_stream[-self.k :]
}
docs_and_scores.update(self.get_salient_docs(query))
rescored_docs = [
(doc, self._get_combined_score(doc, relevance, current_time))
for doc, relevance in docs_and_scores.values()
]
rescored_docs.sort(key=lambda x: x[1], reverse=True)
result = []
for doc, _ in rescored_docs[: self.k]:
buffered_doc = self.memory_stream[doc.metadata["buffer_idx"]]
buffered_doc.metadata["last_accessed_at"] = current_time
result.append(buffered_doc)
return result
async def aget_relevant_documents(self, query: str) -> List[Document]:
"""Return documents that are relevant to the query."""
raise NotImplementedError
def add_documents(self, documents: List[Document], **kwargs: Any) -> List[str]:
"""Add documents to vectorstore."""
current_time = kwargs.get("current_time", datetime.datetime.now())
dup_docs = [deepcopy(d) for d in documents]
for i, doc in enumerate(dup_docs):
if "last_accessed_at" not in doc.metadata:
doc.metadata["last_accessed_at"] = current_time
if "created_at" not in doc.metadata:
doc.metadata["created_at"] = current_time
doc.metadata["buffer_idx"] = len(self.memory_stream) + i
self.memory_stream.extend(dup_docs)
return self.vectorstore.add_documents(dup_docs, **kwargs)
async def aadd_documents(
self, documents: List[Document], **kwargs: Any
) -> List[str]:
"""Add documents to vectorstore."""
current_time = kwargs.get("current_time", datetime.datetime.now())
dup_docs = [deepcopy(d) for d in documents]
for i, doc in enumerate(dup_docs):
if "last_accessed_at" not in doc.metadata:
doc.metadata["last_accessed_at"] = current_time
if "created_at" not in doc.metadata:
doc.metadata["created_at"] = current_time
doc.metadata["buffer_idx"] = len(self.memory_stream) + i
self.memory_stream.extend(dup_docs)
return await self.vectorstore.aadd_documents(dup_docs, **kwargs)
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,072 | Add option to use _additional fields while executing a Weaviate query | ### Feature request
Weaviate has the option to pass the `_additional` field while executing a query:
https://weaviate.io/developers/weaviate/api/graphql/additional-properties
It would be good to be able to use this feature and add the response to the results. It is a small change that does not break the API: we can use the `kwargs` argument, similar to `where_filter` in the Python class in weaviate.py.
### Motivation
When comparing and interpreting query results, the `certainty` score is a useful signal.
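A minimal sketch of the requested extension, assuming the v3 `weaviate-client` query builder (its `with_additional` method exposes these fields; the `additional` kwarg name here is hypothetical):

```python
additional = kwargs.get("additional", [])  # e.g. ["certainty", "id"]
if additional:
    query_obj = query_obj.with_additional(additional)
# each hit then carries res["_additional"]["certainty"] next to its properties
```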
### Your contribution
I would like to contribute a PR. As it would be my first contribution, I need to understand the integration tests and build the project; I have already tested the change in my local code sample. | https://github.com/langchain-ai/langchain/issues/5072 | https://github.com/langchain-ai/langchain/pull/5085 | 87bba2e8d3a7772a32eda45bc17160f4ad8ae3d2 | b95002289409077965d99636b15a45300d9c0b9d | "2023-05-21T22:37:40Z" | python | "2023-05-23T01:57:10Z" | langchain/vectorstores/weaviate.py | """Wrapper around weaviate vector database."""
from __future__ import annotations
import datetime
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type
from uuid import uuid4
import numpy as np
from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.utils import get_from_dict_or_env
from langchain.vectorstores.base import VectorStore
from langchain.vectorstores.utils import maximal_marginal_relevance
def _default_schema(index_name: str) -> Dict:
return {
"class": index_name,
"properties": [
{
"name": "text",
"dataType": ["text"],
}
],
}
def _create_weaviate_client(**kwargs: Any) -> Any:
client = kwargs.get("client")
if client is not None:
return client
weaviate_url = get_from_dict_or_env(kwargs, "weaviate_url", "WEAVIATE_URL")
try:
weaviate_api_key = get_from_dict_or_env(
kwargs, "weaviate_api_key", "WEAVIATE_API_KEY", None
)
except ValueError:
weaviate_api_key = None
try:
import weaviate
except ImportError:
raise ValueError(
"Could not import weaviate python package. "
"Please install it with `pip instal weaviate-client`"
)
auth = (
weaviate.auth.AuthApiKey(api_key=weaviate_api_key)
if weaviate_api_key is not None
else None
)
client = weaviate.Client(weaviate_url, auth_client_secret=auth)
return client
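# Hedged usage sketch (URL and key are placeholders; both may also come from
# the WEAVIATE_URL / WEAVIATE_API_KEY environment variables):
#   client = _create_weaviate_client(
#       weaviate_url="https://my-cluster.weaviate.network",
#       weaviate_api_key="...",
#   )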
def _default_score_normalizer(val: float) -> float:
return 1 - 1 / (1 + np.exp(val))
def _json_serializable(value: Any) -> Any:
if isinstance(value, datetime.datetime):
return value.isoformat()
return value
class Weaviate(VectorStore):
"""Wrapper around Weaviate vector database.
To use, you should have the ``weaviate-client`` python package installed.
Example:
.. code-block:: python
import weaviate
from langchain.vectorstores import Weaviate
client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
weaviate = Weaviate(client, index_name, text_key)
"""
def __init__(
self,
client: Any,
index_name: str,
text_key: str,
embedding: Optional[Embeddings] = None,
attributes: Optional[List[str]] = None,
relevance_score_fn: Optional[
Callable[[float], float]
] = _default_score_normalizer,
by_text: bool = True,
):
"""Initialize with Weaviate client."""
try:
import weaviate
except ImportError:
raise ValueError(
"Could not import weaviate python package. "
"Please install it with `pip install weaviate-client`."
)
if not isinstance(client, weaviate.Client):
raise ValueError(
f"client should be an instance of weaviate.Client, got {type(client)}"
)
self._client = client
self._index_name = index_name
self._embedding = embedding
self._text_key = text_key
self._query_attrs = [self._text_key]
self._relevance_score_fn = relevance_score_fn
self._by_text = by_text
if attributes is not None:
self._query_attrs.extend(attributes)
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
**kwargs: Any,
) -> List[str]:
"""Upload texts with metadata (properties) to Weaviate."""
from weaviate.util import get_valid_uuid
ids = []
with self._client.batch as batch:
for i, text in enumerate(texts):
data_properties = {self._text_key: text}
if metadatas is not None:
for key, val in metadatas[i].items():
data_properties[key] = _json_serializable(val)
_id = (
kwargs["uuids"][i] if "uuids" in kwargs else get_valid_uuid(uuid4())
)
if self._embedding is not None:
vector = self._embedding.embed_documents([text])[0]
else:
vector = None
batch.add_data_object(
data_object=data_properties,
class_name=self._index_name,
uuid=_id,
vector=vector,
)
ids.append(_id)
return ids
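# Usage sketch (class and property names are assumptions about the schema):
#   store = Weaviate(client, "LangChain", "text", embedding=embeddings)
#   ids = store.add_texts(["hello world"], metadatas=[{"source": "demo"}])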
def similarity_search(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query.
"""
if self._by_text:
return self.similarity_search_by_text(query, k, **kwargs)
else:
if self._embedding is None:
raise ValueError(
"_embedding cannot be None for similarity_search when "
"_by_text=False"
)
embedding = self._embedding.embed_query(query)
return self.similarity_search_by_vector(embedding, k, **kwargs)
def similarity_search_by_text(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Document]:
"""Return docs most similar to query.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
Returns:
List of Documents most similar to the query.
"""
content: Dict[str, Any] = {"concepts": [query]}
if kwargs.get("search_distance"):
content["certainty"] = kwargs.get("search_distance")
query_obj = self._client.query.get(self._index_name, self._query_attrs)
if kwargs.get("where_filter"):
query_obj = query_obj.with_where(kwargs.get("where_filter"))
result = query_obj.with_near_text(content).with_limit(k).do()
if "errors" in result:
raise ValueError(f"Error during query: {result['errors']}")
docs = []
for res in result["data"]["Get"][self._index_name]:
text = res.pop(self._text_key)
docs.append(Document(page_content=text, metadata=res))
return docs
def similarity_search_by_vector( |
self, embedding: List[float], k: int = 4, **kwargs: Any
) -> List[Document]:
"""Look up similar documents by embedding vector in Weaviate."""
vector = {"vector": embedding}
query_obj = self._client.query.get(self._index_name, self._query_attrs)
if kwargs.get("where_filter"):
query_obj = query_obj.with_where(kwargs.get("where_filter"))
result = query_obj.with_near_vector(vector).with_limit(k).do()
if "errors" in result:
raise ValueError(f"Error during query: {result['errors']}")
docs = []
for res in result["data"]["Get"][self._index_name]:
text = res.pop(self._text_key)
docs.append(Document(page_content=text, metadata=res))
return docs
def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to. |
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
if self._embedding is not None:
embedding = self._embedding.embed_query(query)
else:
raise ValueError(
"max_marginal_relevance_search requires a suitable Embeddings object"
)
return self.max_marginal_relevance_search_by_vector(
embedding, k=k, fetch_k=fetch_k, lambda_mult=lambda_mult, **kwargs
)
def max_marginal_relevance_search_by_vector(
self,
embedding: List[float],
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args: |
embedding: Embedding to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
Returns:
List of Documents selected by maximal marginal relevance.
"""
vector = {"vector": embedding}
query_obj = self._client.query.get(self._index_name, self._query_attrs)
if kwargs.get("where_filter"):
query_obj = query_obj.with_where(kwargs.get("where_filter"))
results = (
query_obj.with_additional("vector")
.with_near_vector(vector)
.with_limit(fetch_k)
.do()
)
payload = results["data"]["Get"][self._index_name]
embeddings = [result["_additional"]["vector"] for result in payload]
mmr_selected = maximal_marginal_relevance(
np.array(embedding), embeddings, k=k, lambda_mult=lambda_mult
)
docs = []
for idx in mmr_selected:
text = payload[idx].pop(self._text_key)
payload[idx].pop("_additional")
meta = payload[idx] |
docs.append(Document(page_content=text, metadata=meta))
return docs
def similarity_search_with_score(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Tuple[Document, float]]:
if self._embedding is None:
raise ValueError(
"_embedding cannot be None for similarity_search_with_score"
)
content: Dict[str, Any] = {"concepts": [query]}
if kwargs.get("search_distance"):
content["certainty"] = kwargs.get("search_distance")
query_obj = self._client.query.get(self._index_name, self._query_attrs)
result = (
query_obj.with_near_text(content)
.with_limit(k)
.with_additional("vector")
.do()
)
if "errors" in result:
raise ValueError(f"Error during query: {result['errors']}")
docs_and_scores = []
for res in result["data"]["Get"][self._index_name]:
text = res.pop(self._text_key)
score = np.dot(
res["_additional"]["vector"], self._embedding.embed_query(query)
)
docs_and_scores.append((Document(page_content=text, metadata=res), score))
return docs_and_scores
def _similarity_search_with_relevance_scores( |
self,
query: str,
k: int = 4,
**kwargs: Any,
) -> List[Tuple[Document, float]]:
"""Return docs and relevance scores, normalized on a scale from 0 to 1.
0 is dissimilar, 1 is most similar.
"""
if self._relevance_score_fn is None:
raise ValueError(
"relevance_score_fn must be provided to"
" Weaviate constructor to normalize scores"
)
docs_and_scores = self.similarity_search_with_score(query, k=k, **kwargs)
return [
(doc, self._relevance_score_fn(score)) for doc, score in docs_and_scores
]
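# Illustrative sketch, not part of the original file: one possible
# relevance_score_fn. similarity_search_with_score above returns a raw dot
# product, so a caller could normalize it into [0, 1] like this (assuming
# unit-norm embeddings, where the dot product lies in [-1, 1]):
#
#     def dot_to_relevance(score: float) -> float:
#         return (score + 1.0) / 2.0
#
#     docsearch = Weaviate(client, index_name, "text", embedding=embeddings,
#                          relevance_score_fn=dot_to_relevance)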
@classmethod
def from_texts(
cls: Type[Weaviate],
texts: List[str],
embedding: Embeddings,
metadatas: Optional[List[dict]] = None, |
**kwargs: Any,
) -> Weaviate:
"""Construct Weaviate wrapper from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates a new index for the embeddings in the Weaviate instance.
3. Adds the documents to the newly created Weaviate index.
This is intended to be a quick way to get started.
Example:
.. code-block:: python
from langchain.vectorstores.weaviate import Weaviate
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
weaviate = Weaviate.from_texts(
texts,
embeddings,
weaviate_url="http://localhost:8080"
)
"""
client = _create_weaviate_client(**kwargs)
from weaviate.util import get_valid_uuid
index_name = kwargs.get("index_name", f"LangChain_{uuid4().hex}")
embeddings = embedding.embed_documents(texts) if embedding else None
text_key = "text"
schema = _default_schema(index_name)
attributes = list(metadatas[0].keys()) if metadatas else None
if not client.schema.contains(schema):
client.schema.create_class(schema)
with client.batch as batch: |
for i, text in enumerate(texts):
data_properties = {
text_key: text,
}
if metadatas is not None:
for key in metadatas[i].keys():
data_properties[key] = metadatas[i][key]
if "uuids" in kwargs:
_id = kwargs["uuids"][i]
else:
_id = get_valid_uuid(uuid4())
params = {
"uuid": _id,
"data_object": data_properties,
"class_name": index_name,
}
if embeddings is not None:
params["vector"] = embeddings[i]
batch.add_data_object(**params)
batch.flush()
return cls(
client, index_name, text_key, embedding=embedding, attributes=attributes
) |
tests/integration_tests/vectorstores/test_weaviate.py | """Test Weaviate functionality."""
import logging
import os
import uuid
from typing import Generator, Union
import pytest
from weaviate import Client
from langchain.docstore.document import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores.weaviate import Weaviate
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
logging.basicConfig(level=logging.DEBUG)
"""
cd tests/integration_tests/vectorstores/docker-compose
docker compose -f weaviate.yml up
"""
class TestWeaviate:
@classmethod
def setup_class(cls) -> None: |
if not os.getenv("OPENAI_API_KEY"):
raise ValueError("OPENAI_API_KEY environment variable is not set")
@pytest.fixture(scope="class", autouse=True)
def weaviate_url(self) -> Union[str, Generator[str, None, None]]:
"""Return the weaviate url."""
url = "http://localhost:8080"
yield url
client = Client(url)
client.schema.delete_all()
@pytest.mark.vcr(ignore_localhost=True)
def test_similarity_search_without_metadata(
self, weaviate_url: str, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and search without metadata."""
texts = ["foo", "bar", "baz"]
docsearch = Weaviate.from_texts(
texts,
embedding_openai,
weaviate_url=weaviate_url,
)
output = docsearch.similarity_search("foo", k=1)
assert output == [Document(page_content="foo")]
@pytest.mark.vcr(ignore_localhost=True)
def test_similarity_search_with_metadata( |
self, weaviate_url: str, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and search with metadata."""
texts = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts))]
docsearch = Weaviate.from_texts(
texts, embedding_openai, metadatas=metadatas, weaviate_url=weaviate_url
)
output = docsearch.similarity_search("foo", k=1)
assert output == [Document(page_content="foo", metadata={"page": 0})]
@pytest.mark.vcr(ignore_localhost=True)
def test_similarity_search_with_metadata_and_filter(
self, weaviate_url: str, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and search with metadata."""
texts = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts))]
docsearch = Weaviate.from_texts(
texts, embedding_openai, metadatas=metadatas, weaviate_url=weaviate_url
)
output = docsearch.similarity_search(
"foo",
k=2,
where_filter={"path": ["page"], "operator": "Equal", "valueNumber": 0},
)
assert output == [Document(page_content="foo", metadata={"page": 0})]
@pytest.mark.vcr(ignore_localhost=True)
def test_similarity_search_with_uuids( |
self, weaviate_url: str, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and search with uuids."""
texts = ["foo", "bar", "baz"]
uuids = [uuid.uuid5(uuid.NAMESPACE_DNS, "same-name") for text in texts]
metadatas = [{"page": i} for i in range(len(texts))]
docsearch = Weaviate.from_texts(
texts,
embedding_openai,
metadatas=metadatas,
weaviate_url=weaviate_url,
uuids=uuids,
)
output = docsearch.similarity_search("foo", k=2)
assert len(output) == 1 |
@pytest.mark.vcr(ignore_localhost=True)
def test_max_marginal_relevance_search(
self, weaviate_url: str, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and MRR search."""
texts = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts))]
docsearch = Weaviate.from_texts(
texts, embedding_openai, metadatas=metadatas, weaviate_url=weaviate_url
)
standard_ranking = docsearch.similarity_search("foo", k=2)
output = docsearch.max_marginal_relevance_search(
"foo", k=2, fetch_k=3, lambda_mult=1.0
)
assert output == standard_ranking
output = docsearch.max_marginal_relevance_search(
"foo", k=2, fetch_k=3, lambda_mult=0.0
)
assert output == [
Document(page_content="foo", metadata={"page": 0}),
Document(page_content="bar", metadata={"page": 1}),
]
@pytest.mark.vcr(ignore_localhost=True)
def test_max_marginal_relevance_search_by_vector(
self, weaviate_url: str, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and MRR search by vector."""
texts = ["foo", "bar", "baz"] |
metadatas = [{"page": i} for i in range(len(texts))]
docsearch = Weaviate.from_texts(
texts, embedding_openai, metadatas=metadatas, weaviate_url=weaviate_url
)
foo_embedding = embedding_openai.embed_query("foo")
standard_ranking = docsearch.similarity_search("foo", k=2)
output = docsearch.max_marginal_relevance_search_by_vector(
foo_embedding, k=2, fetch_k=3, lambda_mult=1.0
)
assert output == standard_ranking
output = docsearch.max_marginal_relevance_search_by_vector(
foo_embedding, k=2, fetch_k=3, lambda_mult=0.0
)
assert output == [
Document(page_content="foo", metadata={"page": 0}),
Document(page_content="bar", metadata={"page": 1}),
]
@pytest.mark.vcr(ignore_localhost=True)
def test_max_marginal_relevance_search_with_filter(
self, weaviate_url: str, embedding_openai: OpenAIEmbeddings
) -> None:
"""Test end to end construction and MRR search."""
texts = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts))]
docsearch = Weaviate.from_texts(
texts, embedding_openai, metadatas=metadatas, weaviate_url=weaviate_url
)
where_filter = {"path": ["page"], "operator": "Equal", "valueNumber": 0} |
standard_ranking = docsearch.similarity_search(
"foo", k=2, where_filter=where_filter
)
output = docsearch.max_marginal_relevance_search(
"foo", k=2, fetch_k=3, lambda_mult=1.0, where_filter=where_filter
)
assert output == standard_ranking
output = docsearch.max_marginal_relevance_search(
"foo", k=2, fetch_k=3, lambda_mult=0.0, where_filter=where_filter
)
assert output == [
Document(page_content="foo", metadata={"page": 0}),
]
def test_add_texts_with_given_embedding(self, weaviate_url: str) -> None:
texts = ["foo", "bar", "baz"]
embedding = FakeEmbeddings()
docsearch = Weaviate.from_texts(
texts, embedding=embedding, weaviate_url=weaviate_url
)
docsearch.add_texts(["foo"])
output = docsearch.similarity_search_by_vector(
embedding.embed_query("foo"), k=2
)
assert output == [
Document(page_content="foo"),
Document(page_content="foo"),
]
def test_add_texts_with_given_uuids(self, weaviate_url: str) -> None: |
texts = ["foo", "bar", "baz"]
embedding = FakeEmbeddings()
uuids = [uuid.uuid5(uuid.NAMESPACE_DNS, text) for text in texts]
docsearch = Weaviate.from_texts(
texts,
embedding=embedding,
weaviate_url=weaviate_url,
uuids=uuids,
)
docsearch.add_texts(["foo"], uuids=[uuids[0]])
output = docsearch.similarity_search_by_vector(
embedding.embed_query("foo"), k=2
)
assert output[0] == Document(page_content="foo")
assert output[1] != Document(page_content="foo") |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,887 | About fine tune model | I have fine-tuned the Curie model from OpenAI on sample text data and used that model in:
llm = OpenAI(
temperature=0.7,
openai_api_key='sk-...',
model_name="curie:ft-personal-2023-03-31-05-59-15"#"text-davinci-003"#""#'' # can be used with llms like 'gpt-3.5-turbo'
)
After running the script I am getting this error:
ValueError: Unknown model: curie:ft-personal-2023-03-31-05-59-15. Please provide a valid OpenAI model name.Known models are: gpt-4, gpt-4-0314, gpt-4-completion, gpt-4-0314-completion, gpt-4-32k, gpt-4-32k-0314, gpt-4-32k-completion, gpt-4-32k-0314-completion, gpt-3.5-turbo, gpt-3.5-turbo-0301, text-ada-001, ada, text-babbage-001, babbage, text-curie-001, curie, text-davinci-003, text-davinci-002, code-davinci-002
I have given the correct name of the fine-tuned model. What is the issue? Can anyone help me solve this? | https://github.com/langchain-ai/langchain/issues/2887 | https://github.com/langchain-ai/langchain/pull/5127 | 7a75bb21219b605cfd3cad30cc978eb9fb53c479 | 5002f3ae35070f2ba903bccb7b1028595e3c626a | "2023-04-14T10:54:55Z" | python | "2023-05-23T18:18:03Z" | langchain/callbacks/openai_info.py | """Callback Handler that prints to std out."""
from typing import Any, Dict, List, Optional, Union
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction, AgentFinish, LLMResult
MODEL_COST_PER_1K_TOKENS = {
"gpt-4": 0.03,
"gpt-4-0314": 0.03,
"gpt-4-completion": 0.06,
"gpt-4-0314-completion": 0.06,
"gpt-4-32k": 0.06,
"gpt-4-32k-0314": 0.06,
"gpt-4-32k-completion": 0.12,
"gpt-4-32k-0314-completion": 0.12,
"gpt-3.5-turbo": 0.002,
"gpt-3.5-turbo-0301": 0.002,
"text-ada-001": 0.0004,
"ada": 0.0004,
"text-babbage-001": 0.0005,
"babbage": 0.0005,
"text-curie-001": 0.002,
"curie": 0.002,
"text-davinci-003": 0.02,
"text-davinci-002": 0.02,
"code-davinci-002": 0.02,
}
def get_openai_token_cost_for_model( |
model_name: str, num_tokens: int, is_completion: bool = False
) -> float:
suffix = "-completion" if is_completion and model_name.startswith("gpt-4") else ""
model = model_name.lower() + suffix
if model not in MODEL_COST_PER_1K_TOKENS:
raise ValueError(
f"Unknown model: {model_name}. Please provide a valid OpenAI model name."
"Known models are: " + ", ".join(MODEL_COST_PER_1K_TOKENS.keys())
)
return MODEL_COST_PER_1K_TOKENS[model] * num_tokens / 1000
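# Illustrative sketch, not part of the original file: fine-tuned model names
# such as "curie:ft-personal-2023-03-31-05-59-15" are missing from the cost
# table above, so a lookup could first normalize them to the base model:
#
#     def standardize_model_name(model_name: str) -> str:
#         if ":ft-" in model_name:
#             return model_name.split(":")[0]  # "curie:ft-..." -> "curie"
#         return model_name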
class OpenAICallbackHandler(BaseCallbackHandler):
"""Callback Handler that tracks OpenAI info."""
total_tokens: int = 0
prompt_tokens: int = 0
completion_tokens: int = 0
successful_requests: int = 0
total_cost: float = 0.0
def __repr__(self) -> str:
return (
f"Tokens Used: {self.total_tokens}\n"
f"\tPrompt Tokens: {self.prompt_tokens}\n"
f"\tCompletion Tokens: {self.completion_tokens}\n"
f"Successful Requests: {self.successful_requests}\n"
f"Total Cost (USD): ${self.total_cost}"
)
@property
def always_verbose(self) -> bool:
"""Whether to call verbose callbacks even if verbose is False."""
return True
def on_llm_start( |
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
"""Print out the prompts."""
pass
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""Print out the token."""
pass
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Collect token usage."""
if response.llm_output is None:
return None
self.successful_requests += 1
if "token_usage" not in response.llm_output:
return None
token_usage = response.llm_output["token_usage"]
completion_tokens = token_usage.get("completion_tokens", 0)
prompt_tokens = token_usage.get("prompt_tokens", 0)
model_name = response.llm_output.get("model_name")
if model_name and model_name in MODEL_COST_PER_1K_TOKENS:
completion_cost = get_openai_token_cost_for_model(
model_name, completion_tokens, is_completion=True
)
prompt_cost = get_openai_token_cost_for_model(model_name, prompt_tokens)
self.total_cost += prompt_cost + completion_cost
self.total_tokens += token_usage.get("total_tokens", 0)
self.prompt_tokens += prompt_tokens
self.completion_tokens += completion_tokens
def on_llm_error( |
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> None:
"""Do nothing."""
pass
def on_chain_start(
self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
) -> None:
"""Print out that we are entering a chain."""
pass
def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
"""Print out that we finished a chain."""
pass
def on_chain_error(
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> None:
"""Do nothing."""
pass
def on_tool_start(
self,
serialized: Dict[str, Any],
input_str: str,
**kwargs: Any,
) -> None:
"""Print out the log in specified color."""
pass
def on_tool_end( |
self,
output: str,
color: Optional[str] = None,
observation_prefix: Optional[str] = None,
llm_prefix: Optional[str] = None,
**kwargs: Any,
) -> None:
"""If not the final action, print out observation."""
pass
def on_tool_error(
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> None:
"""Do nothing."""
pass
def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
"""Run on agent action."""
pass
def on_agent_finish(
self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any
) -> None:
"""Run on agent end."""
pass
def __copy__(self) -> "OpenAICallbackHandler":
"""Return a copy of the callback handler."""
return self
def __deepcopy__(self, memo: Any) -> "OpenAICallbackHandler":
"""Return a deep copy of the callback handler."""
return self |
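# Usage sketch, not part of the original file: the handler is typically
# attached via the get_openai_callback() context manager.
#
#     from langchain.callbacks import get_openai_callback
#     from langchain.llms import OpenAI
#
#     llm = OpenAI(model_name="text-davinci-003")
#     with get_openai_callback() as cb:
#         llm("Tell me a joke")
#         print(cb.total_tokens, cb.total_cost)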
langchain/llms/openai.py | """Wrapper around OpenAI APIs."""
from __future__ import annotations
import logging
import sys
import warnings
from typing import (
AbstractSet,
Any,
Callable,
Collection,
Dict,
Generator,
List,
Literal,
Mapping,
Optional,
Set,
Tuple,
Union,
)
from pydantic import Extra, Field, root_validator
from tenacity import (
before_sleep_log,
retry, |
retry_if_exception_type,
stop_after_attempt,
wait_exponential,
)
from langchain.callbacks.manager import (
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from langchain.llms.base import BaseLLM
from langchain.schema import Generation, LLMResult
from langchain.utils import get_from_dict_or_env
logger = logging.getLogger(__name__)
def update_token_usage(
keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]
) -> None:
"""Update token usage."""
_keys_to_use = keys.intersection(response["usage"])
for _key in _keys_to_use:
if _key not in token_usage:
token_usage[_key] = response["usage"][_key]
else:
token_usage[_key] += response["usage"][_key]
def _update_response(response: Dict[str, Any], stream_response: Dict[str, Any]) -> None:
"""Update response from the stream response."""
response["choices"][0]["text"] += stream_response["choices"][0]["text"]
response["choices"][0]["finish_reason"] = stream_response["choices"][0][
"finish_reason"
]
response["choices"][0]["logprobs"] = stream_response["choices"][0]["logprobs"]
def _streaming_response_template() -> Dict[str, Any]: |
return {
"choices": [
{
"text": "",
"finish_reason": None,
"logprobs": None,
}
]
}
def _create_retry_decorator(llm: Union[BaseOpenAI, OpenAIChat]) -> Callable[[Any], Any]:
import openai
min_seconds = 4
max_seconds = 10
return retry(
reraise=True,
stop=stop_after_attempt(llm.max_retries),
wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
retry=(
retry_if_exception_type(openai.error.Timeout)
| retry_if_exception_type(openai.error.APIError)
| retry_if_exception_type(openai.error.APIConnectionError)
| retry_if_exception_type(openai.error.RateLimitError)
| retry_if_exception_type(openai.error.ServiceUnavailableError)
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
def completion_with_retry(llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any) -> Any: |
"""Use tenacity to retry the completion call."""
retry_decorator = _create_retry_decorator(llm)
@retry_decorator
def _completion_with_retry(**kwargs: Any) -> Any:
return llm.client.create(**kwargs)
return _completion_with_retry(**kwargs)
async def acompletion_with_retry(
llm: Union[BaseOpenAI, OpenAIChat], **kwargs: Any
) -> Any:
"""Use tenacity to retry the async completion call."""
retry_decorator = _create_retry_decorator(llm)
@retry_decorator
async def _completion_with_retry(**kwargs: Any) -> Any:
return await llm.client.acreate(**kwargs)
return await _completion_with_retry(**kwargs)
class BaseOpenAI(BaseLLM): |
"""Wrapper around OpenAI large language models."""
client: Any
model_name: str = Field("text-davinci-003", alias="model")
"""Model name to use."""
temperature: float = 0.7
"""What sampling temperature to use."""
max_tokens: int = 256
"""The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the models maximal context size."""
top_p: float = 1
"""Total probability mass of tokens to consider at each step."""
frequency_penalty: float = 0
"""Penalizes repeated tokens according to frequency."""
presence_penalty: float = 0
"""Penalizes repeated tokens."""
n: int = 1
"""How many completions to generate for each prompt."""
best_of: int = 1
"""Generates best_of completions server-side and returns the "best"."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict) |
"""Holds any model parameters valid for `create` call not explicitly specified."""
openai_api_key: Optional[str] = None
openai_api_base: Optional[str] = None
openai_organization: Optional[str] = None
batch_size: int = 20
"""Batch size to use when passing multiple documents to generate."""
request_timeout: Optional[Union[float, Tuple[float, float]]] = None
"""Timeout for requests to OpenAI completion API. Default is 600 seconds."""
logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict)
"""Adjust the probability of specific tokens being generated."""
max_retries: int = 6
"""Maximum number of retries to make when generating."""
streaming: bool = False
"""Whether to stream the results or not."""
allowed_special: Union[Literal["all"], AbstractSet[str]] = set()
"""Set of special tokens that are allowed。"""
disallowed_special: Union[Literal["all"], Collection[str]] = "all"
"""Set of special tokens that are not allowed。"""
def __new__(cls, **data: Any) -> Union[OpenAIChat, BaseOpenAI]:  # type: ignore
"""Initialize the OpenAI object."""
model_name = data.get("model_name", "")
if model_name.startswith("gpt-3.5-turbo") or model_name.startswith("gpt-4"):
warnings.warn(
"You are trying to use a chat model. This way of initializing it is "
"no longer supported. Instead, please use: "
"`from langchain.chat_models import ChatOpenAI`"
)
return OpenAIChat(**data)
return super().__new__(cls)
class Config: |
"""Configuration for this pydantic object."""
extra = Extra.ignore
allow_population_by_field_name = True
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = cls.all_required_field_names()
extra = values.get("model_kwargs", {})
for field_name in list(values):
if field_name in extra:
raise ValueError(f"Found {field_name} supplied twice.")
if field_name not in all_required_field_names:
logger.warning(
f"""WARNING! {field_name} is not default parameter.
{field_name} was transferred to model_kwargs.
Please confirm that {field_name} is what you intended."""
)
extra[field_name] = values.pop(field_name)
invalid_model_kwargs = all_required_field_names.intersection(extra.keys())
if invalid_model_kwargs:
raise ValueError(
f"Parameters {invalid_model_kwargs} should be specified explicitly. "
f"Instead they were passed in as part of `model_kwargs` parameter."
)
values["model_kwargs"] = extra
return values
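    # Worked example (the `foo` parameter is a made-up illustration, not a
    # real OpenAI option): OpenAI(temperature=0.2, foo=1) logs the warning
    # above and ends up with model_kwargs == {"foo": 1}.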
@root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
openai_api_key = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
openai_api_base = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
default="",
)
openai_organization = get_from_dict_or_env(
values,
"openai_organization",
"OPENAI_ORGANIZATION",
default="",
)
try:
import openai
openai.api_key = openai_api_key
if openai_api_base:
openai.api_base = openai_api_base
if openai_organization:
                openai.organization = openai_organization
            values["client"] = openai.Completion
except ImportError:
raise ImportError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
if values["streaming"] and values["n"] > 1:
raise ValueError("Cannot stream results when n > 1.")
if values["streaming"] and values["best_of"] > 1:
raise ValueError("Cannot stream results when best_of > 1.")
return values
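    # Usage sketch (placeholder values; normally set in the environment, not
    # in code):
    #
    #     import os
    #     os.environ["OPENAI_API_KEY"] = "..."       # required
    #     os.environ["OPENAI_API_BASE"] = "..."      # optional base override
    #     os.environ["OPENAI_ORGANIZATION"] = "..."  # optional org header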
@property
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling OpenAI API."""
normal_params = {
"temperature": self.temperature,
"max_tokens": self.max_tokens,
"top_p": self.top_p,
"frequency_penalty": self.frequency_penalty,
"presence_penalty": self.presence_penalty,
"n": self.n,
"request_timeout": self.request_timeout,
"logit_bias": self.logit_bias,
}
        # Azure gpt-35-turbo doesn't support best_of
        # don't specify best_of if it is 1
if self.best_of > 1:
normal_params["best_of"] = self.best_of
return {**normal_params, **self.model_kwargs}
    def _generate(
        self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> LLMResult:
"""Call out to OpenAI's endpoint with k unique prompts.
Args:
prompts: The prompts to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
The full LLM output.
Example:
.. code-block:: python
response = openai.generate(["Tell me a joke."])
"""
        # TODO: write a unit test for this
params = self._invocation_params
        sub_prompts = self.get_sub_prompts(params, prompts, stop)
        choices = []
token_usage: Dict[str, int] = {}
        # Get the token usage from the response.
        # Includes prompt, completion, and total tokens used.
_keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
for _prompts in sub_prompts:
if self.streaming:
if len(_prompts) > 1:
raise ValueError("Cannot stream results with multiple prompts.")
params["stream"] = True
response = _streaming_response_template()
for stream_resp in completion_with_retry(
self, prompt=_prompts, **params
):
if run_manager:
run_manager.on_llm_new_token(
stream_resp["choices"][0]["text"],
verbose=self.verbose,
logprobs=stream_resp["choices"][0]["logprobs"],
)
_update_response(response, stream_resp)
choices.extend(response["choices"])
else:
response = completion_with_retry(self, prompt=_prompts, **params)
choices.extend(response["choices"])
if not self.streaming:
            # Can't update token usage if streaming
update_token_usage(_keys, response, token_usage)
return self.create_llm_result(choices, prompts, token_usage)
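    # Hedged usage sketch (assumes a valid OPENAI_API_KEY is configured):
    #
    #     llm = OpenAI(model_name="text-davinci-003", batch_size=20)
    #     result = llm.generate(["Tell me a joke."] * 40)  # two API round-trips
    #
    # Note that the streaming path above accepts only a single prompt per call.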
    async def _agenerate(
        self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
) -> LLMResult:
"""Call out to OpenAI's endpoint async with k unique prompts."""
params = self._invocation_params
sub_prompts = self.get_sub_prompts(params, prompts, stop)
choices = []
token_usage: Dict[str, int] = {}
        # Get the token usage from the response.
        # Includes prompt, completion, and total tokens used.
_keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
for _prompts in sub_prompts:
if self.streaming:
if len(_prompts) > 1:
raise ValueError("Cannot stream results with multiple prompts.")
params["stream"] = True
response = _streaming_response_template()
async for stream_resp in await acompletion_with_retry(
self, prompt=_prompts, **params
):
if run_manager:
await run_manager.on_llm_new_token(
stream_resp["choices"][0]["text"],
verbose=self.verbose,
logprobs=stream_resp["choices"][0]["logprobs"],
)
_update_response(response, stream_resp)
                choices.extend(response["choices"])
            else:
response = await acompletion_with_retry(self, prompt=_prompts, **params)
choices.extend(response["choices"])
if not self.streaming:
            # Can't update token usage if streaming
update_token_usage(_keys, response, token_usage)
return self.create_llm_result(choices, prompts, token_usage)
def get_sub_prompts(
self,
params: Dict[str, Any],
prompts: List[str],
stop: Optional[List[str]] = None,
) -> List[List[str]]:
"""Get the sub prompts for llm call."""
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
params["stop"] = stop
if params["max_tokens"] == -1:
if len(prompts) != 1:
raise ValueError(
"max_tokens set to -1 not supported for multiple inputs."
)
params["max_tokens"] = self.max_tokens_for_prompt(prompts[0])
sub_prompts = [
prompts[i : i + self.batch_size]
for i in range(0, len(prompts), self.batch_size)
]
return sub_prompts
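    # Worked example: with batch_size=20, fifty prompts are split into
    # sub-prompt batches of sizes [20, 20, 10], i.e. three Completion requests.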
    def create_llm_result(
        self, choices: Any, prompts: List[str], token_usage: Dict[str, int]
) -> LLMResult:
"""Create the LLMResult from the choices and prompts."""
generations = []
for i, _ in enumerate(prompts):
sub_choices = choices[i * self.n : (i + 1) * self.n]
generations.append(
[
Generation(
text=choice["text"],
generation_info=dict(
finish_reason=choice.get("finish_reason"),
logprobs=choice.get("logprobs"),
),
)
for choice in sub_choices
]
)
llm_output = {"token_usage": token_usage, "model_name": self.model_name}
return LLMResult(generations=generations, llm_output=llm_output)
    def stream(self, prompt: str, stop: Optional[List[str]] = None) -> Generator:
        """Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Args:
prompt: The prompts to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
A generator representing the stream of tokens from OpenAI.
Example:
.. code-block:: python
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
"""
params = self.prep_streaming_params(stop)
generator = self.client.create(prompt=prompt, **params)
return generator
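    # Illustrative consumption of the generator (the chunk shape follows the
    # OpenAI SDK of this era and should be treated as an assumption):
    #
    #     for chunk in llm.stream("Tell me a joke."):
    #         print(chunk["choices"][0]["text"], end="")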
def prep_streaming_params(self, stop: Optional[List[str]] = None) -> Dict[str, Any]:
"""Prepare the params for streaming."""
params = self._invocation_params
if "best_of" in params and params["best_of"] != 1:
raise ValueError("OpenAI only supports best_of == 1 for streaming")
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
params["stop"] = stop
params["stream"] = True
return params
@property
    def _invocation_params(self) -> Dict[str, Any]:
        """Get the parameters used to invoke the model."""
return self._default_params
@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters."""
return {**{"model_name": self.model_name}, **self._default_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "openai"
def get_token_ids(self, text: str) -> List[int]:
"""Get the token IDs using the tiktoken package."""
        # tiktoken NOT supported for Python < 3.8
if sys.version_info[1] < 8:
            return super().get_token_ids(text)
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to calculate get_num_tokens. "
"Please install it with `pip install tiktoken`."
)
enc = tiktoken.encoding_for_model(self.model_name)
return enc.encode(
text,
allowed_special=self.allowed_special,
disallowed_special=self.disallowed_special,
)
    def modelname_to_contextsize(self, modelname: str) -> int:
        """Calculate the maximum number of tokens possible to generate for a model.
Args:
modelname: The modelname we want to know the context size for.
Returns:
The maximum context size
Example:
.. code-block:: python
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
"""
model_token_mapping = {
"gpt-4": 8192,
"gpt-4-0314": 8192,
"gpt-4-32k": 32768,
"gpt-4-32k-0314": 32768,
"gpt-3.5-turbo": 4096,
"gpt-3.5-turbo-0301": 4096,
"text-ada-001": 2049,
"ada": 2049,
"text-babbage-001": 2040,
"babbage": 2049,
"text-curie-001": 2049,
"curie": 2049, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,887 | About fine tune model | I have fine tuned curie model of OPEN AI on sample text data and i used that model in
llm = OpenAI(
temperature=0.7,
openai_api_key='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',
model_name="curie:ft-personal-2023-03-31-05-59-15"#"text-davinci-003"#""#'' # can be used with llms like 'gpt-3.5-turbo'
)
after run the script i am getting an error
ValueError: Unknown model: curie:ft-personal-2023-03-31-05-59-15. Please provide a valid OpenAI model name.Known models are: gpt-4, gpt-4-0314, gpt-4-completion, gpt-4-0314-completion, gpt-4-32k, gpt-4-32k-0314, gpt-4-32k-completion, gpt-4-32k-0314-completion, gpt-3.5-turbo, gpt-3.5-turbo-0301, text-ada-001, ada, text-babbage-001, babbage, text-curie-001, curie, text-davinci-003, text-davinci-002, code-davinci-002
i have give a correct name of fine tune model. what is the issue. can anyone help me to solve this? | https://github.com/langchain-ai/langchain/issues/2887 | https://github.com/langchain-ai/langchain/pull/5127 | 7a75bb21219b605cfd3cad30cc978eb9fb53c479 | 5002f3ae35070f2ba903bccb7b1028595e3c626a | "2023-04-14T10:54:55Z" | python | "2023-05-23T18:18:03Z" | langchain/llms/openai.py | "davinci": 2049,
"text-davinci-003": 4097,
"text-davinci-002": 4097,
"code-davinci-002": 8001,
"code-davinci-001": 8001,
"code-cushman-002": 2048,
"code-cushman-001": 2048,
}
context_size = model_token_mapping.get(modelname, None)
if context_size is None:
raise ValueError(
f"Unknown model: {modelname}. Please provide a valid OpenAI model name."
"Known models are: " + ", ".join(model_token_mapping.keys())
)
return context_size
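    # Sketch of how the fine-tuned-model failure quoted in the issue above
    # (e.g. "curie:ft-personal-...") could be avoided before the table lookup;
    # an illustration only, not necessarily the patch that shipped:
    #
    #     if ":ft-" in modelname:
    #         modelname = modelname.split(":")[0]  # "curie:ft-..." -> "curie"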
def max_tokens_for_prompt(self, prompt: str) -> int:
"""Calculate the maximum number of tokens possible to generate for a prompt.
Args:
prompt: The prompt to pass into the model.
Returns:
The maximum number of tokens to generate for a prompt.
Example:
.. code-block:: python
                max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
"""
num_tokens = self.get_num_tokens(prompt)
        # get max context size for model by name
max_size = self.modelname_to_contextsize(self.model_name)
return max_size - num_tokens
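    # Worked example: text-davinci-003 has a 4097-token context, so a prompt
    # that tokenizes to 97 tokens leaves 4097 - 97 = 4000 tokens to generate.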
class OpenAI(BaseOpenAI):
    """Wrapper around OpenAI large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
"""
@property
def _invocation_params(self) -> Dict[str, Any]:
return {**{"model": self.model_name}, **super()._invocation_params}
class AzureOpenAI(BaseOpenAI):
    """Wrapper around Azure-specific OpenAI large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
"""
deployment_name: str = ""
"""Deployment name to use."""
@property
def _identifying_params(self) -> Mapping[str, Any]:
return {
**{"deployment_name": self.deployment_name},
**super()._identifying_params,
}
@property
def _invocation_params(self) -> Dict[str, Any]:
return {**{"engine": self.deployment_name}, **super()._invocation_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "azure"
class OpenAIChat(BaseLLM):
    """Wrapper around OpenAI Chat large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.llms import OpenAIChat
openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
"""
client: Any
model_name: str = "gpt-3.5-turbo"
"""Model name to use."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified."""
openai_api_key: Optional[str] = None
openai_api_base: Optional[str] = None
max_retries: int = 6
"""Maximum number of retries to make when generating."""
prefix_messages: List = Field(default_factory=list)
"""Series of messages for Chat input."""
streaming: bool = False
"""Whether to stream the results or not."""
allowed_special: Union[Literal["all"], AbstractSet[str]] = set()
"""Set of special tokens that are allowed。"""
disallowed_special: Union[Literal["all"], Collection[str]] = "all"
"""Set of special tokens that are not allowed。"""
    class Config:
        """Configuration for this pydantic object."""
extra = Extra.ignore
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = {field.alias for field in cls.__fields__.values()}
extra = values.get("model_kwargs", {})
for field_name in list(values):
if field_name not in all_required_field_names:
if field_name in extra:
raise ValueError(f"Found {field_name} supplied twice.")
extra[field_name] = values.pop(field_name)
values["model_kwargs"] = extra
return values
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
openai_api_key = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
openai_api_base = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
default="",
)
        openai_organization = get_from_dict_or_env(
            values, "openai_organization", "OPENAI_ORGANIZATION", default=""
)
try:
import openai
openai.api_key = openai_api_key
if openai_api_base:
openai.api_base = openai_api_base
if openai_organization:
openai.organization = openai_organization
except ImportError:
raise ImportError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
try:
values["client"] = openai.ChatCompletion
except AttributeError:
raise ValueError(
"`openai` has no `ChatCompletion` attribute, this is likely "
"due to an old version of the openai package. Try upgrading it "
"with `pip install --upgrade openai`."
)
warnings.warn(
"You are trying to use a chat model. This way of initializing it is "
"no longer supported. Instead, please use: "
"`from langchain.chat_models import ChatOpenAI`"
)
return values
@property
    def _default_params(self) -> Dict[str, Any]:
        """Get the default parameters for calling OpenAI API."""
return self.model_kwargs
def _get_chat_params(
self, prompts: List[str], stop: Optional[List[str]] = None
) -> Tuple:
if len(prompts) > 1:
raise ValueError(
f"OpenAIChat currently only supports single prompt, got {prompts}"
)
messages = self.prefix_messages + [{"role": "user", "content": prompts[0]}]
params: Dict[str, Any] = {**{"model": self.model_name}, **self._default_params}
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
params["stop"] = stop
if params.get("max_tokens") == -1:
            # for ChatGPT api, omitting max_tokens is equivalent to having no limit
del params["max_tokens"]
return messages, params
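    # Worked example (the system message is an assumption for illustration):
    #
    #     chat = OpenAIChat(prefix_messages=[{"role": "system", "content": "Be brief."}])
    #     messages, params = chat._get_chat_params(["Hi"])
    #     # messages == [system message, {"role": "user", "content": "Hi"}]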
def _generate(
self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
) -> LLMResult:
messages, params = self._get_chat_params(prompts, stop)
if self.streaming:
response = ""
params["stream"] = True |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,887 | About fine tune model | I have fine tuned curie model of OPEN AI on sample text data and i used that model in
llm = OpenAI(
temperature=0.7,
openai_api_key='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',
model_name="curie:ft-personal-2023-03-31-05-59-15"#"text-davinci-003"#""#'' # can be used with llms like 'gpt-3.5-turbo'
)
after run the script i am getting an error
ValueError: Unknown model: curie:ft-personal-2023-03-31-05-59-15. Please provide a valid OpenAI model name.Known models are: gpt-4, gpt-4-0314, gpt-4-completion, gpt-4-0314-completion, gpt-4-32k, gpt-4-32k-0314, gpt-4-32k-completion, gpt-4-32k-0314-completion, gpt-3.5-turbo, gpt-3.5-turbo-0301, text-ada-001, ada, text-babbage-001, babbage, text-curie-001, curie, text-davinci-003, text-davinci-002, code-davinci-002
i have give a correct name of fine tune model. what is the issue. can anyone help me to solve this? | https://github.com/langchain-ai/langchain/issues/2887 | https://github.com/langchain-ai/langchain/pull/5127 | 7a75bb21219b605cfd3cad30cc978eb9fb53c479 | 5002f3ae35070f2ba903bccb7b1028595e3c626a | "2023-04-14T10:54:55Z" | python | "2023-05-23T18:18:03Z" | langchain/llms/openai.py | for stream_resp in completion_with_retry(self, messages=messages, **params):
token = stream_resp["choices"][0]["delta"].get("content", "")
response += token
if run_manager:
run_manager.on_llm_new_token(
token,
)
return LLMResult(
generations=[[Generation(text=response)]],
)
else:
full_response = completion_with_retry(self, messages=messages, **params)
llm_output = {
"token_usage": full_response["usage"],
"model_name": self.model_name,
}
return LLMResult(
generations=[
[Generation(text=full_response["choices"][0]["message"]["content"])]
],
llm_output=llm_output,
)
async def _agenerate(
self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
) -> LLMResult:
messages, params = self._get_chat_params(prompts, stop)
        if self.streaming:
            response = ""
params["stream"] = True
async for stream_resp in await acompletion_with_retry(
self, messages=messages, **params
):
token = stream_resp["choices"][0]["delta"].get("content", "")
response += token
if run_manager:
await run_manager.on_llm_new_token(
token,
)
return LLMResult(
generations=[[Generation(text=response)]],
)
else:
full_response = await acompletion_with_retry(
self, messages=messages, **params
)
llm_output = {
"token_usage": full_response["usage"],
"model_name": self.model_name,
}
return LLMResult(
generations=[
[Generation(text=full_response["choices"][0]["message"]["content"])]
],
llm_output=llm_output,
)
@property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
return {**{"model_name": self.model_name}, **self._default_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "openai-chat"
def get_token_ids(self, text: str) -> List[int]:
"""Get the token IDs using the tiktoken package."""
        # tiktoken NOT supported for Python < 3.8
if sys.version_info[1] < 8:
return super().get_token_ids(text)
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to calculate get_num_tokens. "
"Please install it with `pip install tiktoken`."
)
enc = tiktoken.encoding_for_model(self.model_name)
return enc.encode(
text,
allowed_special=self.allowed_special,
disallowed_special=self.disallowed_special,
) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,104 | GoogleDriveLoader seems to be pulling trashed documents from the folder | ### System Info
Hi
testing this loader, it looks as tho this is pulling trashed files from folders. I think this should be default to false if anything and be an opt in.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
### Related Components
- [X] Document Loaders
### Reproduction
use GoogleDriveLoader
1. point to folder
2. move a file to trash in folder
Reindex
File still can be searched in vector store.
### Expected behavior
Should not be searchable | https://github.com/langchain-ai/langchain/issues/5104 | https://github.com/langchain-ai/langchain/pull/5220 | eff31a33613bcdc179d6ad22febbabf8dccf80c8 | f0ea093de867e5f099a4b5de2bfa24d788b79133 | "2023-05-22T21:21:14Z" | python | "2023-05-25T05:26:17Z" | langchain/document_loaders/googledrive.py | """Loader that loads data from Google Drive."""
from pathlib import Path
from typing import Any, Dict, List, Optional, Sequence, Union
from pydantic import BaseModel, root_validator, validator
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseLoader
SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
class GoogleDriveLoader(BaseLoader, BaseModel):
"""Loader that loads Google Docs from Google Drive."""
service_account_key: Path = Path.home() / ".credentials" / "keys.json"
credentials_path: Path = Path.home() / ".credentials" / "credentials.json"
token_path: Path = Path.home() / ".credentials" / "token.json"
folder_id: Optional[str] = None
document_ids: Optional[List[str]] = None
file_ids: Optional[List[str]] = None
recursive: bool = False
file_types: Optional[Sequence[str]] = None
@root_validator
    def validate_inputs(cls, values: Dict[str, Any]) -> Dict[str, Any]:
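    # Sketch of the opt-in the issue above asks for (the field name and query
    # literal are assumptions, not necessarily the shipped fix): exclude
    # trashed files from the Drive listing by default.
    #
    #     load_trashed_files: bool = False
    #     query = f"'{folder_id}' in parents and trashed = false"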