status (stringclasses, 1 value) | repo_name (stringclasses, 31 values) | repo_url (stringclasses, 31 values) | issue_id (int64, 1-104k) | title (stringlengths, 4-233) | body (stringlengths, 0-186k, ⌀) | issue_url (stringlengths, 38-56) | pull_url (stringlengths, 37-54) | before_fix_sha (stringlengths, 40) | after_fix_sha (stringlengths, 40) | report_datetime (unknown) | language (stringclasses, 5 values) | commit_datetime (unknown) | updated_file (stringlengths, 7-188) | chunk_content (stringlengths, 1-1.03M) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,786 | RetrievalQA.from_chain_type: callbacks are not called for all nested chains | ### System Info
langchain: 0.0.252
python: 3.10.12
@agola11
### Who can help?
@agola11 please take a look,
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a callback handler LogHandler for on_chain_start and on_chat_model_start, and log run_id and parent_run_id in each of them
2. Create a retrieval chain and add this LogHandler
3. Add this LogHandler to the llm as well
4. When running the chain, one of the nested chains is not logged in between, because callbacks are not passed to that chain (a reproduction sketch is given below)
### Expected behavior
All the nested chains should have callbacks defined.
| https://github.com/langchain-ai/langchain/issues/8786 | https://github.com/langchain-ai/langchain/pull/8787 | 5f1aab548731b53ebab00dd745a35ec7da52bf1c | 797c9e92c82f8e843b321ec2167bb1678ced03cf | "2023-08-05T06:43:10Z" | python | "2023-08-06T22:11:45Z" | libs/langchain/langchain/chains/question_answering/__init__.py | map_chain = LLMChain(
llm=llm,
prompt=_question_prompt,
verbose=verbose,
callback_manager=callback_manager,
callbacks=callbacks,
)
_reduce_llm = reduce_llm or llm
reduce_chain = LLMChain(
llm=_reduce_llm,
prompt=_combine_prompt,
verbose=verbose,
callback_manager=callback_manager,
callbacks=callbacks,
)
combine_documents_chain = StuffDocumentsChain(
llm_chain=reduce_chain,
document_variable_name=combine_document_variable_name,
verbose=verbose,
callback_manager=callback_manager,
callbacks=callbacks,
)
if collapse_prompt is None:
collapse_chain = None
if collapse_llm is not None:
raise ValueError(
"collapse_llm provided, but collapse_prompt was not: please "
"provide one or stop providing collapse_llm."
) |
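To make the reproduction steps in the issue above concrete, a minimal sketch is given below. The handler body, the model choice, and the retriever are illustrative assumptions, not code taken from the report:

```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI


class LogHandler(BaseCallbackHandler):
    """Hypothetical handler that logs run ids for chain and chat-model starts."""

    def on_chain_start(self, serialized, inputs, *, run_id, parent_run_id=None, **kwargs):
        print("chain start:", run_id, "parent:", parent_run_id)

    def on_chat_model_start(self, serialized, messages, *, run_id, parent_run_id=None, **kwargs):
        print("chat model start:", run_id, "parent:", parent_run_id)


llm = ChatOpenAI(callbacks=[LogHandler()])  # handler attached to the LLM as well (step 3)
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="map_reduce",
    retriever=retriever,  # `retriever` is assumed to be built elsewhere (any BaseRetriever)
    callbacks=[LogHandler()],
)
qa.run("What do the documents say about callbacks?")
# Symptom from the issue: one nested chain never shows up in the handler output,
# because callbacks are not propagated to it.
```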
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,786 | RetrievalQA.from_chain_type: callbacks are not called for all nested chains | ### System Info
langchain: 0.0.252
python: 3.10.12
@agola11
### Who can help?
@agola11 please take a look,
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a callback handler LogHandler for on_chain_start and on_chat_model_start, and log run_id and parent_run_id in each of them
2. Create a retrieval chain and add this LogHandler
3. Add this LogHandler to the llm as well
4. When running the chain, one of the nested chains is not logged in between, because callbacks are not passed to that chain
### Expected behavior
All the nested chains should have callbacks defined.
| https://github.com/langchain-ai/langchain/issues/8786 | https://github.com/langchain-ai/langchain/pull/8787 | 5f1aab548731b53ebab00dd745a35ec7da52bf1c | 797c9e92c82f8e843b321ec2167bb1678ced03cf | "2023-08-05T06:43:10Z" | python | "2023-08-06T22:11:45Z" | libs/langchain/langchain/chains/question_answering/__init__.py | else:
_collapse_llm = collapse_llm or llm
collapse_chain = StuffDocumentsChain(
llm_chain=LLMChain(
llm=_collapse_llm,
prompt=collapse_prompt,
verbose=verbose,
callback_manager=callback_manager,
callbacks=callbacks,
),
document_variable_name=combine_document_variable_name,
verbose=verbose,
callback_manager=callback_manager,
)
reduce_documents_chain = ReduceDocumentsChain(
combine_documents_chain=combine_documents_chain,
collapse_documents_chain=collapse_chain,
token_max=token_max,
verbose=verbose,
)
return MapReduceDocumentsChain(
llm_chain=map_chain,
document_variable_name=map_reduce_document_variable_name,
reduce_documents_chain=reduce_documents_chain,
verbose=verbose,
callback_manager=callback_manager,
callbacks=callbacks,
**kwargs,
)
def _load_refine_chain( |
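Note that in the chunk above the collapse `StuffDocumentsChain` and the `ReduceDocumentsChain` are built with `verbose`/`callback_manager` but no `callbacks`, which matches the symptom in the report. A hedged sketch of the kind of change that would propagate them (not necessarily the exact diff of the linked PR) is:

```python
        # Sketch only: pass `callbacks` through to the chains that currently miss it.
        collapse_chain = StuffDocumentsChain(
            llm_chain=LLMChain(
                llm=_collapse_llm,
                prompt=collapse_prompt,
                verbose=verbose,
                callback_manager=callback_manager,
                callbacks=callbacks,
            ),
            document_variable_name=combine_document_variable_name,
            verbose=verbose,
            callback_manager=callback_manager,
            callbacks=callbacks,  # previously missing
        )
    reduce_documents_chain = ReduceDocumentsChain(
        combine_documents_chain=combine_documents_chain,
        collapse_documents_chain=collapse_chain,
        token_max=token_max,
        verbose=verbose,
        callbacks=callbacks,  # previously missing
    )
```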
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,786 | RetrievalQA.from_chain_type: callbacks are not called for all nested chains | ### System Info
langchain: 0.0.252
python: 3.10.12
@agola11
### Who can help?
@agola11 please take a look,
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a callback handler LogHandler for on_chain_start and on_chat_model_start, and log run_id and parent_run_id in each of them
2. Create a retrieval chain and add this LogHandler
3. Add this LogHandler to the llm as well
4. When running the chain, one of the nested chains is not logged in between, because callbacks are not passed to that chain
### Expected behavior
All the nested chains should have callbacks defined.
| https://github.com/langchain-ai/langchain/issues/8786 | https://github.com/langchain-ai/langchain/pull/8787 | 5f1aab548731b53ebab00dd745a35ec7da52bf1c | 797c9e92c82f8e843b321ec2167bb1678ced03cf | "2023-08-05T06:43:10Z" | python | "2023-08-06T22:11:45Z" | libs/langchain/langchain/chains/question_answering/__init__.py | llm: BaseLanguageModel,
question_prompt: Optional[BasePromptTemplate] = None,
refine_prompt: Optional[BasePromptTemplate] = None,
document_variable_name: str = "context_str",
initial_response_name: str = "existing_answer",
refine_llm: Optional[BaseLanguageModel] = None,
verbose: Optional[bool] = None,
callback_manager: Optional[BaseCallbackManager] = None,
callbacks: Callbacks = None,
**kwargs: Any,
) -> RefineDocumentsChain:
_question_prompt = (
question_prompt or refine_prompts.QUESTION_PROMPT_SELECTOR.get_prompt(llm)
) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,786 | RetrievalQA.from_chain_type: callbacks are not called for all nested chains | ### System Info
langchain: 0.0.252
python: 3.10.12
@agola11
### Who can help?
@agola11 please take a look,
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a callback handler LogHandler for on_chain_start and on_chat_model_start, and log run_id and parent_run_id in each of them
2. Create a retrieval chain and add this LogHandler
3. Add this LogHandler to the llm as well
4. When running the chain, one of the nested chains is not logged in between, because callbacks are not passed to that chain
### Expected behavior
All the nested chains should have callbacks defined.
| https://github.com/langchain-ai/langchain/issues/8786 | https://github.com/langchain-ai/langchain/pull/8787 | 5f1aab548731b53ebab00dd745a35ec7da52bf1c | 797c9e92c82f8e843b321ec2167bb1678ced03cf | "2023-08-05T06:43:10Z" | python | "2023-08-06T22:11:45Z" | libs/langchain/langchain/chains/question_answering/__init__.py | _refine_prompt = refine_prompt or refine_prompts.REFINE_PROMPT_SELECTOR.get_prompt(
llm
)
initial_chain = LLMChain(
llm=llm,
prompt=_question_prompt,
verbose=verbose,
callback_manager=callback_manager,
callbacks=callbacks,
)
_refine_llm = refine_llm or llm
refine_chain = LLMChain(
llm=_refine_llm,
prompt=_refine_prompt,
verbose=verbose,
callback_manager=callback_manager,
callbacks=callbacks,
)
return RefineDocumentsChain(
initial_llm_chain=initial_chain,
refine_llm_chain=refine_chain,
document_variable_name=document_variable_name,
initial_response_name=initial_response_name,
verbose=verbose,
callback_manager=callback_manager,
**kwargs,
)
def load_qa_chain(
llm: BaseLanguageModel,
chain_type: str = "stuff", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,786 | RetrievalQA.from_chain_type: callbacks are not called for all nested chains | ### System Info
langchain: 0.0.252
python: 3.10.12
@agola11
### Who can help?
@agola11 please take a look,
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a callback handler LogHandler for on_chain_start and on_chat_model_start, and log run_id and parent_run_id in each of them
2. Create a retrieval chain and add this LogHandler
3. Add this LogHandler to the llm as well
4. When running the chain, one of the nested chains is not logged in between, because callbacks are not passed to that chain
### Expected behavior
All the nested chains should have callbacks defined.
| https://github.com/langchain-ai/langchain/issues/8786 | https://github.com/langchain-ai/langchain/pull/8787 | 5f1aab548731b53ebab00dd745a35ec7da52bf1c | 797c9e92c82f8e843b321ec2167bb1678ced03cf | "2023-08-05T06:43:10Z" | python | "2023-08-06T22:11:45Z" | libs/langchain/langchain/chains/question_answering/__init__.py | verbose: Optional[bool] = None,
callback_manager: Optional[BaseCallbackManager] = None,
**kwargs: Any,
) -> BaseCombineDocumentsChain:
"""Load question answering chain.
Args:
llm: Language Model to use in the chain.
chain_type: Type of document combining chain to use. Should be one of "stuff",
"map_reduce", "map_rerank", and "refine".
verbose: Whether chains should be run in verbose mode or not. Note that this
applies to all chains that make up the final chain.
callback_manager: Callback manager to use for the chain.
Returns:
A chain to use for question answering.
"""
loader_mapping: Mapping[str, LoadingCallable] = {
"stuff": _load_stuff_chain,
"map_reduce": _load_map_reduce_chain,
"refine": _load_refine_chain,
"map_rerank": _load_map_rerank_chain,
}
if chain_type not in loader_mapping:
raise ValueError(
f"Got unsupported chain type: {chain_type}. "
f"Should be one of {loader_mapping.keys()}"
)
return loader_mapping[chain_type](
llm, verbose=verbose, callback_manager=callback_manager, **kwargs
) |
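For reference, a typical call into this loader looks roughly like the following; the model, documents, and question are placeholders rather than anything from the issue:

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import ChatOpenAI
from langchain.schema import Document

# Extra kwargs such as `callbacks` are forwarded to the per-chain-type loader above.
chain = load_qa_chain(ChatOpenAI(temperature=0), chain_type="map_reduce")
docs = [Document(page_content="Callbacks should reach every nested chain.")]
answer = chain.run(input_documents=docs, question="What should reach every nested chain?")
```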
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass
from langchain.chains.openai_functions.base import (
convert_to_openai_function,
)
@dataclass
class System:
name: str
ram: int
convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | "2023-07-12T21:03:09Z" | python | "2023-08-06T22:12:03Z" | libs/langchain/langchain/chains/openai_functions/base.py | """Methods for creating chains that use OpenAI function-calling APIs."""
import inspect
import re
from typing import Any, Callable, Dict, List, Optional, Sequence, Tuple, Type, Union
from pydantic import BaseModel
from langchain.base_language import BaseLanguageModel
from langchain.chains import LLMChain
from langchain.output_parsers.openai_functions import (
JsonOutputFunctionsParser,
PydanticAttrOutputFunctionsParser,
PydanticOutputFunctionsParser,
)
from langchain.prompts import BasePromptTemplate
from langchain.schema import BaseLLMOutputParser
PYTHON_TO_JSON_TYPES = {
"str": "string",
"int": "number",
"float": "number",
"bool": "boolean",
}
def _get_python_function_name(function: Callable) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass
from langchain.chains.openai_functions.base import (
convert_to_openai_function,
)
@dataclass
class System:
name: str
ram: int
convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | "2023-07-12T21:03:09Z" | python | "2023-08-06T22:12:03Z" | libs/langchain/langchain/chains/openai_functions/base.py | """Get the name of a Python function."""
source = inspect.getsource(function)
return re.search(r"^def (.*)\(", source).groups()[0]
def _parse_python_function_docstring(function: Callable) -> Tuple[str, dict]:
"""Parse the function and argument descriptions from the docstring of a function.
Assumes the function docstring follows Google Python style guide.
"""
docstring = inspect.getdoc(function)
if docstring:
docstring_blocks = docstring.split("\n\n")
descriptors = []
args_block = None
past_descriptors = False
for block in docstring_blocks:
if block.startswith("Args:"):
args_block = block
break
elif block.startswith("Returns:") or block.startswith("Example:"):
past_descriptors = True
elif not past_descriptors:
descriptors.append(block)
else:
continue
description = " ".join(descriptors)
else:
description = ""
args_block = None |
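The `inspect.getsource` plus regex name lookup shown above is exactly what the issue reports breaking for classes and for callables whose source is unavailable; the simpler alternative the report suggests would be a one-liner (a sketch, not necessarily the merged fix):

```python
def _get_python_function_name(function: Callable) -> str:
    """Get the name of a Python function (or class) without reading its source."""
    return function.__name__
```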
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass
from langchain.chains.openai_functions.base import (
convert_to_openai_function,
)
@dataclass
class System:
name: str
ram: int
convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | "2023-07-12T21:03:09Z" | python | "2023-08-06T22:12:03Z" | libs/langchain/langchain/chains/openai_functions/base.py | arg_descriptions = {}
if args_block:
arg = None
for line in args_block.split("\n")[1:]:
if ":" in line:
arg, desc = line.split(":")
arg_descriptions[arg.strip()] = desc.strip()
elif arg:
arg_descriptions[arg.strip()] += " " + line.strip()
return description, arg_descriptions
def _get_python_function_arguments(function: Callable, arg_descriptions: dict) -> dict:
"""Get JsonSchema describing a Python functions arguments.
Assumes all function arguments are of primitive types (int, float, str, bool) or
are subclasses of pydantic.BaseModel.
"""
properties = {}
annotations = inspect.getfullargspec(function).annotations
for arg, arg_type in annotations.items():
if arg == "return":
continue
if isinstance(arg_type, type) and issubclass(arg_type, BaseModel):
properties[arg] = arg_type.schema()
elif arg_type.__name__ in PYTHON_TO_JSON_TYPES:
properties[arg] = {"type": PYTHON_TO_JSON_TYPES[arg_type.__name__]}
if arg in arg_descriptions:
if arg not in properties:
properties[arg] = {}
properties[arg]["description"] = arg_descriptions[arg]
return properties
def _get_python_function_required_args(function: Callable) -> List[str]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass
from langchain.chains.openai_functions.base import (
convert_to_openai_function,
)
@dataclass
class System:
name: str
ram: int
convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | "2023-07-12T21:03:09Z" | python | "2023-08-06T22:12:03Z" | libs/langchain/langchain/chains/openai_functions/base.py | """Get the required arguments for a Python function."""
spec = inspect.getfullargspec(function)
required = spec.args[: -len(spec.defaults)] if spec.defaults else spec.args
required += [k for k in spec.kwonlyargs if k not in (spec.kwonlydefaults or {})]
return required
def convert_python_function_to_openai_function(function: Callable) -> Dict[str, Any]:
"""Convert a Python function to an OpenAI function-calling API compatible dict.
Assumes the Python function has type hints and a docstring with a description. If
the docstring has Google Python style argument descriptions, these will be
included as well.
"""
description, arg_descriptions = _parse_python_function_docstring(function)
return {
"name": _get_python_function_name(function),
"description": description,
"parameters": {
"type": "object",
"properties": _get_python_function_arguments(function, arg_descriptions),
"required": _get_python_function_required_args(function),
},
}
def convert_to_openai_function( |
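As a rough illustration of what the helpers above produce, consider a small made-up function with type hints and a Google-style docstring:

```python
def get_weather(city: str, days: int = 1) -> str:
    """Get a weather forecast.

    Args:
        city: Name of the city.
        days: Number of days to forecast.
    """
    return f"{days}-day forecast for {city}"

convert_python_function_to_openai_function(get_weather)
# Roughly:
# {
#     "name": "get_weather",
#     "description": "Get a weather forecast.",
#     "parameters": {
#         "type": "object",
#         "properties": {
#             "city": {"type": "string", "description": "Name of the city."},
#             "days": {"type": "number", "description": "Number of days to forecast."},
#         },
#         "required": ["city"],
#     },
# }
```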
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass
from langchain.chains.openai_functions.base import (
convert_to_openai_function,
)
@dataclass
class System:
name: str
ram: int
convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | "2023-07-12T21:03:09Z" | python | "2023-08-06T22:12:03Z" | libs/langchain/langchain/chains/openai_functions/base.py | function: Union[Dict[str, Any], Type[BaseModel], Callable]
) -> Dict[str, Any]:
"""Convert a raw function/class to an OpenAI function.
Args:
function: Either a dictionary, a pydantic.BaseModel class, or a Python function.
If a dictionary is passed in, it is assumed to already be a valid OpenAI
function.
Returns:
A dict version of the passed in function which is compatible with the
OpenAI function-calling API.
"""
if isinstance(function, dict):
return function
elif isinstance(function, type) and issubclass(function, BaseModel):
schema = function.schema()
return {
"name": schema["title"],
"description": schema["description"],
"parameters": schema,
}
elif callable(function):
return convert_python_function_to_openai_function(function)
else:
raise ValueError(
f"Unsupported function type {type(function)}. Functions must be passed in"
f" as Dict, pydantic.BaseModel, or Callable."
)
def _get_openai_output_parser( |
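With `convert_to_openai_function` fully shown, the working pydantic path and the failing dataclass path from the issue can be contrasted; `WeatherQuery` is a made-up example, while `System` is the dataclass from the report:

```python
from dataclasses import dataclass

from pydantic import BaseModel, Field


class WeatherQuery(BaseModel):
    """Look up the weather for a city."""

    city: str = Field(..., description="Name of the city")


convert_to_openai_function(WeatherQuery)
# -> {"name": "WeatherQuery", "description": "Look up the weather for a city.",
#     "parameters": {...JSON schema from WeatherQuery.schema()...}}


@dataclass
class System:
    name: str
    ram: int


convert_to_openai_function(System)
# Not a BaseModel but still callable, so it falls through to
# convert_python_function_to_openai_function, whose source/regex-based name lookup
# fails for a class: there is no `def ...(` line to match, and in some environments
# the source cannot be retrieved at all.
```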
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass
from langchain.chains.openai_functions.base import (
convert_to_openai_function,
)
@dataclass
class System:
name: str
ram: int
convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | "2023-07-12T21:03:09Z" | python | "2023-08-06T22:12:03Z" | libs/langchain/langchain/chains/openai_functions/base.py | functions: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable]],
function_names: Sequence[str],
) -> BaseLLMOutputParser:
"""Get the appropriate function output parser given the user functions."""
if isinstance(functions[0], type) and issubclass(functions[0], BaseModel):
if len(functions) > 1:
pydantic_schema: Union[Dict, Type[BaseModel]] = {
name: fn for name, fn in zip(function_names, functions)
}
else:
pydantic_schema = functions[0]
output_parser: BaseLLMOutputParser = PydanticOutputFunctionsParser(
pydantic_schema=pydantic_schema
)
else:
output_parser = JsonOutputFunctionsParser(args_only=len(functions) <= 1)
return output_parser
def create_openai_fn_chain( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass
from langchain.chains.openai_functions.base import (
convert_to_openai_function,
)
@dataclass
class System:
name: str
ram: int
convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | "2023-07-12T21:03:09Z" | python | "2023-08-06T22:12:03Z" | libs/langchain/langchain/chains/openai_functions/base.py | functions: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable]],
llm: BaseLanguageModel,
prompt: BasePromptTemplate,
*,
output_parser: Optional[BaseLLMOutputParser] = None,
**kwargs: Any,
) -> LLMChain:
"""Create an LLM chain that uses OpenAI functions.
Args:
functions: A sequence of either dictionaries, pydantic.BaseModels classes, or
Python functions. If dictionaries are passed in, they are assumed to
already be a valid OpenAI functions. If only a single
function is passed in, then it will be enforced that the model use that
function. pydantic.BaseModels and Python functions should have docstrings
describing what the function does. For best results, pydantic.BaseModels
should have descriptions of the parameters and Python functions should have
Google Python style args descriptions in the docstring. Additionally,
Python functions should only use primitive types (str, int, float, bool) or
pydantic.BaseModels for arguments.
llm: Language model to use, assumed to support the OpenAI function-calling API. |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass
from langchain.chains.openai_functions.base import (
convert_to_openai_function,
)
@dataclass
class System:
name: str
ram: int
convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | "2023-07-12T21:03:09Z" | python | "2023-08-06T22:12:03Z" | libs/langchain/langchain/chains/openai_functions/base.py | prompt: BasePromptTemplate to pass to the model.
output_parser: BaseLLMOutputParser to use for parsing model outputs. By default
will be inferred from the function types. If pydantic.BaseModels are passed
in, then the OutputParser will try to parse outputs using those. Otherwise
model outputs will simply be parsed as JSON. If multiple functions are
passed in and they are not pydantic.BaseModels, the chain output will
include both the name of the function that was returned and the arguments
to pass to the function.
Returns:
An LLMChain that will pass in the given functions to the model when run.
Example:
.. code-block:: python
from langchain.chains.openai_functions import create_openai_fn_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from pydantic import BaseModel, Field
class RecordPerson(BaseModel):
\"\"\"Record some identifying information about a person.\"\"\"
name: str = Field(..., description="The person's name")
age: int = Field(..., description="The person's age")
fav_food: Optional[str] = Field(None, description="The person's favorite food")
class RecordDog(BaseModel):
\"\"\"Record some identifying information about a dog.\"\"\"
name: str = Field(..., description="The dog's name")
color: str = Field(..., description="The dog's color")
fav_food: Optional[str] = Field(None, description="The dog's favorite food")
llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
prompt_msgs = [
SystemMessage(
content="You are a world class algorithm for recording entities" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass
from langchain.chains.openai_functions.base import (
convert_to_openai_function,
)
@dataclass
class System:
name: str
ram: int
convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | "2023-07-12T21:03:09Z" | python | "2023-08-06T22:12:03Z" | libs/langchain/langchain/chains/openai_functions/base.py | ),
HumanMessage(content="Make calls to the relevant function to record the entities in the following input:"),
HumanMessagePromptTemplate.from_template("{input}"),
HumanMessage(content="Tips: Make sure to answer in the correct format"),
]
prompt = ChatPromptTemplate(messages=prompt_msgs)
chain = create_openai_fn_chain([RecordPerson, RecordDog])
chain.run("Harry was a chubby brown beagle who loved chicken")
# -> RecordDog(name="Harry", color="brown", fav_food="chicken")
"""
if not functions:
raise ValueError("Need to pass in at least one function. Received zero.")
openai_functions = [convert_to_openai_function(f) for f in functions]
fn_names = [oai_fn["name"] for oai_fn in openai_functions]
output_parser = output_parser or _get_openai_output_parser(functions, fn_names)
llm_kwargs: Dict[str, Any] = {
"functions": openai_functions,
}
if len(openai_functions) == 1:
llm_kwargs["function_call"] = {"name": openai_functions[0]["name"]}
llm_chain = LLMChain(
llm=llm,
prompt=prompt,
output_parser=output_parser,
llm_kwargs=llm_kwargs,
output_key="function",
**kwargs,
)
return llm_chain
def create_structured_output_chain( |
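Besides the pydantic example in the docstring, a plain Python function takes the `convert_python_function_to_openai_function` path and its output is parsed as JSON. A sketch, reusing an `llm` and `prompt` built as in the docstring example:

```python
def record_dog(name: str, color: str) -> None:
    """Record some identifying information about a dog.

    Args:
        name: The dog's name.
        color: The dog's color.
    """


chain = create_openai_fn_chain([record_dog], llm, prompt)
chain.run("Harry was a chubby brown beagle")
# -> {"name": "Harry", "color": "brown"}
#    (a single non-pydantic function yields JsonOutputFunctionsParser with args_only=True)
```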
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass
from langchain.chains.openai_functions.base import (
convert_to_openai_function,
)
@dataclass
class System:
name: str
ram: int
convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | "2023-07-12T21:03:09Z" | python | "2023-08-06T22:12:03Z" | libs/langchain/langchain/chains/openai_functions/base.py | output_schema: Union[Dict[str, Any], Type[BaseModel]],
llm: BaseLanguageModel,
prompt: BasePromptTemplate,
*,
output_parser: Optional[BaseLLMOutputParser] = None,
**kwargs: Any,
) -> LLMChain:
"""Create an LLMChain that uses an OpenAI function to get a structured output.
Args:
output_schema: Either a dictionary or pydantic.BaseModel class. If a dictionary
is passed in, it's assumed to already be a valid JsonSchema.
For best results, pydantic.BaseModels should have docstrings describing what
the schema represents and descriptions for the parameters.
llm: Language model to use, assumed to support the OpenAI function-calling API.
prompt: BasePromptTemplate to pass to the model.
output_parser: BaseLLMOutputParser to use for parsing model outputs. By default
will be inferred from the function types. If pydantic.BaseModels are passed
in, then the OutputParser will try to parse outputs using those. Otherwise
model outputs will simply be parsed as JSON.
Returns:
An LLMChain that will pass the given function to the model.
Example:
.. code-block:: python
from langchain.chains.openai_functions import create_structured_output_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from pydantic import BaseModel, Field |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass
from langchain.chains.openai_functions.base import (
convert_to_openai_function,
)
@dataclass
class System:
name: str
ram: int
convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | "2023-07-12T21:03:09Z" | python | "2023-08-06T22:12:03Z" | libs/langchain/langchain/chains/openai_functions/base.py | class Dog(BaseModel):
\"\"\"Identifying information about a dog.\"\"\"
name: str = Field(..., description="The dog's name")
color: str = Field(..., description="The dog's color")
fav_food: Optional[str] = Field(None, description="The dog's favorite food")
llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
prompt_msgs = [
SystemMessage(
content="You are a world class algorithm for extracting information in structured formats."
),
HumanMessage(content="Use the given format to extract information from the following input:"),
HumanMessagePromptTemplate.from_template("{input}"),
HumanMessage(content="Tips: Make sure to answer in the correct format"),
]
prompt = ChatPromptTemplate(messages=prompt_msgs)
chain = create_structured_output_chain(Dog, llm, prompt)
chain.run("Harry was a chubby brown beagle who loved chicken")
# -> Dog(name="Harry", color="brown", fav_food="chicken")
"""
if isinstance(output_schema, dict):
function: Any = {
"name": "output_formatter",
"description": (
"Output formatter. Should always be used to format your response to the"
" user."
),
"parameters": output_schema,
}
else:
class _OutputFormatter(BaseModel): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,616 | _get_python_function_name does not work with classes | ### System Info
LangChain : v0.0.231
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Issue
`convert_to_openai_function` does not work as intended:
- Classes are not supported
- Any function without its source is not supported
# Reproduce
```python
from dataclasses import dataclass
from langchain.chains.openai_functions.base import (
convert_to_openai_function,
)
@dataclass
class System:
name: str
ram: int
convert_to_openai_function(System)
```
### Expected behavior
When calling `langchain.chains.openai_functions.base.convert_to_openai_function`, the subsequent call to `_get_python_function_name` fails because it tries to read source code (and cannot find it).
Something much simpler would be to access the `__name__` attribute of the callable.
| https://github.com/langchain-ai/langchain/issues/7616 | https://github.com/langchain-ai/langchain/pull/7617 | 797c9e92c82f8e843b321ec2167bb1678ced03cf | 4a7ebb7184fa5dad4cdfef49d1eab2a3e9029a2b | "2023-07-12T21:03:09Z" | python | "2023-08-06T22:12:03Z" | libs/langchain/langchain/chains/openai_functions/base.py | """Output formatter. Should always be used to format your response to the user."""
output: output_schema
function = _OutputFormatter
output_parser = output_parser or PydanticAttrOutputFunctionsParser(
pydantic_schema=_OutputFormatter, attr_name="output"
)
return create_openai_fn_chain(
[function], llm, prompt, output_parser=output_parser, **kwargs
) |
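The dictionary branch above accepts a raw JsonSchema instead of a pydantic class; a sketch, again assuming an `llm` and `prompt` like those in the docstring example:

```python
json_schema = {
    "title": "Dog",
    "description": "Identifying information about a dog.",
    "type": "object",
    "properties": {
        "name": {"type": "string", "description": "The dog's name"},
        "color": {"type": "string", "description": "The dog's color"},
    },
    "required": ["name", "color"],
}

chain = create_structured_output_chain(json_schema, llm, prompt)
chain.run("Harry was a chubby brown beagle who loved chicken")
# -> {"name": "Harry", "color": "brown"}  (plain dict, parsed as JSON rather than a pydantic object)
```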
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,729 | VLLM | ### Feature request
Can we please get vLLM support for faster inference?
### Motivation
Faster inference speed compared to using the Hugging Face pipeline.
### Your contribution
n/a | https://github.com/langchain-ai/langchain/issues/8729 | https://github.com/langchain-ai/langchain/pull/8806 | 100d9ce4c7b55db0c9df973a26bbc18d5ad5800c | a616e19975796ff6e3cde24597ba90eee714d57a | "2023-08-04T00:45:38Z" | python | "2023-08-07T14:32:02Z" | libs/langchain/langchain/llms/__init__.py | """
**LLM** classes provide
access to the large language model (**LLM**) APIs and services.
**Class hierarchy:**
.. code-block::
BaseLanguageModel --> BaseLLM --> LLM --> <name> # Examples: AI21, HuggingFaceHub, OpenAI
**Main helpers:**
.. code-block::
LLMResult, PromptValue,
CallbackManagerForLLMRun, AsyncCallbackManagerForLLMRun,
CallbackManager, AsyncCallbackManager,
AIMessage, BaseMessage
"""
from typing import Dict, Type
from langchain.llms.ai21 import AI21 |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,729 | VLLM | ### Feature request
Can we please get vLLM support for faster inference?
### Motivation
Faster inference speed compared to using the Hugging Face pipeline.
### Your contribution
n/a | https://github.com/langchain-ai/langchain/issues/8729 | https://github.com/langchain-ai/langchain/pull/8806 | 100d9ce4c7b55db0c9df973a26bbc18d5ad5800c | a616e19975796ff6e3cde24597ba90eee714d57a | "2023-08-04T00:45:38Z" | python | "2023-08-07T14:32:02Z" | libs/langchain/langchain/llms/__init__.py | from langchain.llms.aleph_alpha import AlephAlpha
from langchain.llms.amazon_api_gateway import AmazonAPIGateway
from langchain.llms.anthropic import Anthropic
from langchain.llms.anyscale import Anyscale
from langchain.llms.aviary import Aviary
from langchain.llms.azureml_endpoint import AzureMLOnlineEndpoint
from langchain.llms.bananadev import Banana
from langchain.llms.base import BaseLLM
from langchain.llms.baseten import Baseten
from langchain.llms.beam import Beam
from langchain.llms.bedrock import Bedrock
from langchain.llms.cerebriumai import CerebriumAI
from langchain.llms.chatglm import ChatGLM
from langchain.llms.clarifai import Clarifai
from langchain.llms.cohere import Cohere
from langchain.llms.ctransformers import CTransformers
from langchain.llms.databricks import Databricks
from langchain.llms.deepinfra import DeepInfra
from langchain.llms.edenai import EdenAI
from langchain.llms.fake import FakeListLLM
from langchain.llms.fireworks import Fireworks, FireworksChat
from langchain.llms.forefrontai import ForefrontAI
from langchain.llms.google_palm import GooglePalm
from langchain.llms.gooseai import GooseAI
from langchain.llms.gpt4all import GPT4All
from langchain.llms.huggingface_endpoint import HuggingFaceEndpoint
from langchain.llms.huggingface_hub import HuggingFaceHub
from langchain.llms.huggingface_pipeline import HuggingFacePipeline
from langchain.llms.huggingface_text_gen_inference import HuggingFaceTextGenInference
from langchain.llms.human import HumanInputLLM |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,729 | VLLM | ### Feature request
Can we please get vLLM support for faster inference?
### Motivation
Faster inference speed compared to using the Hugging Face pipeline.
### Your contribution
n/a | https://github.com/langchain-ai/langchain/issues/8729 | https://github.com/langchain-ai/langchain/pull/8806 | 100d9ce4c7b55db0c9df973a26bbc18d5ad5800c | a616e19975796ff6e3cde24597ba90eee714d57a | "2023-08-04T00:45:38Z" | python | "2023-08-07T14:32:02Z" | libs/langchain/langchain/llms/__init__.py | from langchain.llms.koboldai import KoboldApiLLM
from langchain.llms.llamacpp import LlamaCpp
from langchain.llms.manifest import ManifestWrapper
from langchain.llms.minimax import Minimax
from langchain.llms.mlflow_ai_gateway import MlflowAIGateway
from langchain.llms.modal import Modal
from langchain.llms.mosaicml import MosaicML
from langchain.llms.nlpcloud import NLPCloud
from langchain.llms.octoai_endpoint import OctoAIEndpoint
from langchain.llms.openai import AzureOpenAI, OpenAI, OpenAIChat
from langchain.llms.openllm import OpenLLM
from langchain.llms.openlm import OpenLM
from langchain.llms.petals import Petals
from langchain.llms.pipelineai import PipelineAI
from langchain.llms.predibase import Predibase
from langchain.llms.predictionguard import PredictionGuard
from langchain.llms.promptlayer_openai import PromptLayerOpenAI, PromptLayerOpenAIChat
from langchain.llms.replicate import Replicate
from langchain.llms.rwkv import RWKV
from langchain.llms.sagemaker_endpoint import SagemakerEndpoint
from langchain.llms.self_hosted import SelfHostedPipeline
from langchain.llms.self_hosted_hugging_face import SelfHostedHuggingFaceLLM
from langchain.llms.stochasticai import StochasticAI
from langchain.llms.textgen import TextGen
from langchain.llms.tongyi import Tongyi
from langchain.llms.vertexai import VertexAI
from langchain.llms.writer import Writer
from langchain.llms.xinference import Xinference
__all__ = [
"AI21", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,729 | VLLM | ### Feature request
Can we please get vLLM support for faster inference?
### Motivation
Faster inference speed compared to using the Hugging Face pipeline.
### Your contribution
n/a | https://github.com/langchain-ai/langchain/issues/8729 | https://github.com/langchain-ai/langchain/pull/8806 | 100d9ce4c7b55db0c9df973a26bbc18d5ad5800c | a616e19975796ff6e3cde24597ba90eee714d57a | "2023-08-04T00:45:38Z" | python | "2023-08-07T14:32:02Z" | libs/langchain/langchain/llms/__init__.py | "AlephAlpha",
"AmazonAPIGateway",
"Anthropic",
"Anyscale",
"Aviary",
"AzureMLOnlineEndpoint",
"AzureOpenAI",
"Banana",
"Baseten",
"Beam",
"Bedrock",
"CTransformers",
"CerebriumAI",
"ChatGLM",
"Clarifai",
"Cohere",
"Databricks",
"DeepInfra",
"EdenAI",
"FakeListLLM",
"Fireworks",
"FireworksChat",
"ForefrontAI",
"GPT4All",
"GooglePalm",
"GooseAI",
"HuggingFaceEndpoint",
"HuggingFaceHub",
"HuggingFacePipeline",
"HuggingFaceTextGenInference", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,729 | VLLM | ### Feature request
Can we please get vLLM support for faster inference?
### Motivation
Faster inference speed compared to using the Hugging Face pipeline.
### Your contribution
n/a | https://github.com/langchain-ai/langchain/issues/8729 | https://github.com/langchain-ai/langchain/pull/8806 | 100d9ce4c7b55db0c9df973a26bbc18d5ad5800c | a616e19975796ff6e3cde24597ba90eee714d57a | "2023-08-04T00:45:38Z" | python | "2023-08-07T14:32:02Z" | libs/langchain/langchain/llms/__init__.py | "HumanInputLLM",
"KoboldApiLLM",
"LlamaCpp",
"TextGen",
"ManifestWrapper",
"Minimax",
"MlflowAIGateway",
"Modal",
"MosaicML",
"NLPCloud",
"OpenAI",
"OpenAIChat",
"OpenLLM",
"OpenLM",
"Petals",
"PipelineAI",
"Predibase",
"PredictionGuard",
"PromptLayerOpenAI",
"PromptLayerOpenAIChat",
"RWKV",
"Replicate",
"SagemakerEndpoint",
"SelfHostedHuggingFaceLLM",
"SelfHostedPipeline",
"StochasticAI",
"Tongyi",
"VertexAI",
"Writer",
"OctoAIEndpoint", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,729 | VLLM | ### Feature request
Can we please get vLLM support for faster inference?
### Motivation
Faster inference speed compared to using the Hugging Face pipeline.
### Your contribution
n/a | https://github.com/langchain-ai/langchain/issues/8729 | https://github.com/langchain-ai/langchain/pull/8806 | 100d9ce4c7b55db0c9df973a26bbc18d5ad5800c | a616e19975796ff6e3cde24597ba90eee714d57a | "2023-08-04T00:45:38Z" | python | "2023-08-07T14:32:02Z" | libs/langchain/langchain/llms/__init__.py | "Xinference",
]
type_to_cls_dict: Dict[str, Type[BaseLLM]] = {
"ai21": AI21,
"aleph_alpha": AlephAlpha,
"amazon_api_gateway": AmazonAPIGateway,
"amazon_bedrock": Bedrock,
"anthropic": Anthropic,
"anyscale": Anyscale,
"aviary": Aviary,
"azure": AzureOpenAI,
"azureml_endpoint": AzureMLOnlineEndpoint,
"bananadev": Banana,
"baseten": Baseten,
"beam": Beam,
"cerebriumai": CerebriumAI,
"chat_glm": ChatGLM,
"clarifai": Clarifai,
"cohere": Cohere,
"ctransformers": CTransformers,
"databricks": Databricks,
"deepinfra": DeepInfra,
"edenai": EdenAI,
"fake-list": FakeListLLM,
"forefrontai": ForefrontAI,
"google_palm": GooglePalm,
"gooseai": GooseAI,
"gpt4all": GPT4All,
"huggingface_endpoint": HuggingFaceEndpoint,
"huggingface_hub": HuggingFaceHub, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,729 | VLLM | ### Feature request
can we please get vllm support for faster inference
### Motivation
faster inference speed compared to using hugging face pipeline
### Your contribution
n/a | https://github.com/langchain-ai/langchain/issues/8729 | https://github.com/langchain-ai/langchain/pull/8806 | 100d9ce4c7b55db0c9df973a26bbc18d5ad5800c | a616e19975796ff6e3cde24597ba90eee714d57a | "2023-08-04T00:45:38Z" | python | "2023-08-07T14:32:02Z" | libs/langchain/langchain/llms/__init__.py | "huggingface_pipeline": HuggingFacePipeline,
"huggingface_textgen_inference": HuggingFaceTextGenInference,
"human-input": HumanInputLLM,
"koboldai": KoboldApiLLM,
"llamacpp": LlamaCpp,
"textgen": TextGen,
"minimax": Minimax,
"mlflow-ai-gateway": MlflowAIGateway,
"modal": Modal,
"mosaic": MosaicML,
"nlpcloud": NLPCloud,
"openai": OpenAI,
"openlm": OpenLM,
"petals": Petals,
"pipelineai": PipelineAI,
"predibase": Predibase,
"replicate": Replicate,
"rwkv": RWKV,
"sagemaker_endpoint": SagemakerEndpoint,
"self_hosted": SelfHostedPipeline,
"self_hosted_hugging_face": SelfHostedHuggingFaceLLM,
"stochasticai": StochasticAI,
"tongyi": Tongyi,
"vertexai": VertexAI,
"openllm": OpenLLM,
"openllm_client": OpenLLM,
"writer": Writer,
"xinference": Xinference,
} |
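Given the registry pattern above, supporting a new backend such as the requested vLLM integration comes down to adding an import, an `__all__` entry, and a `type_to_cls_dict` key. The module path and class name below are assumptions for illustration, not the merged implementation:

```python
# Hypothetical additions to langchain/llms/__init__.py for a vLLM integration.
from langchain.llms.vllm import VLLM  # assumed module and class name

__all__ = [*__all__, "VLLM"]
type_to_cls_dict["vllm"] = VLLM
```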
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,989 | OutputFixingParser is not async | ### System Info
LangChain Python v0.0.237
Based on this code snippet it appears that OutputFixingParser doesn't support async flows.
https://github.com/hwchase17/langchain/blob/df84e1bb64d96377f909651f696f310c43c2f2c5/langchain/output_parsers/fix.py#L46-L52
It's calling the run function and not arun
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Define an async callback handler
2. Make the LLM return output that is unparsable (invalid JSON or two code blocks)
3. OutputFixingParser will fail to parse the output and throw an exception, which causes the LLM to be called via the run function, which doesn't await coroutines. Python will give the following error:
```
RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited
```
### Expected behavior
1. Should work with coroutines as expected | https://github.com/langchain-ai/langchain/issues/7989 | https://github.com/langchain-ai/langchain/pull/8776 | cc908d49a3c23e128fab7c89fa45d7cc4114f028 | 33cdb06b5c9d4d3e7f54d5e1e7c980dfae33923b | "2023-07-20T08:29:12Z" | python | "2023-08-07T21:42:48Z" | libs/langchain/langchain/output_parsers/fix.py | from __future__ import annotations
from typing import TypeVar
from langchain.chains.llm import LLMChain
from langchain.output_parsers.prompts import NAIVE_FIX_PROMPT
from langchain.schema import BaseOutputParser, BasePromptTemplate, OutputParserException
from langchain.schema.language_model import BaseLanguageModel
T = TypeVar("T")
class OutputFixingParser(BaseOutputParser[T]): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,989 | OutputFixingParser is not async | ### System Info
LangChain Python v0.0.237
Based on this code snippet it appears that OutputFixingParser doesn't support async flows.
https://github.com/hwchase17/langchain/blob/df84e1bb64d96377f909651f696f310c43c2f2c5/langchain/output_parsers/fix.py#L46-L52
It's calling the run function and not arun
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Define an async callback handler
2. Make the LLM return output that is unparsable (invalid JSON or two code blocks)
3. OutputFixingParser will fail to parse the output and throw an exception, which causes the LLM to be called via the run function, which doesn't await coroutines. Python will give the following error:
```
RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited
```
### Expected behavior
1. Should work with coroutines as expected | https://github.com/langchain-ai/langchain/issues/7989 | https://github.com/langchain-ai/langchain/pull/8776 | cc908d49a3c23e128fab7c89fa45d7cc4114f028 | 33cdb06b5c9d4d3e7f54d5e1e7c980dfae33923b | "2023-07-20T08:29:12Z" | python | "2023-08-07T21:42:48Z" | libs/langchain/langchain/output_parsers/fix.py | """Wraps a parser and tries to fix parsing errors."""
@property
def lc_serializable(self) -> bool:
return True
parser: BaseOutputParser[T]
retry_chain: LLMChain
@classmethod
def from_llm(
cls,
llm: BaseLanguageModel,
parser: BaseOutputParser[T],
prompt: BasePromptTemplate = NAIVE_FIX_PROMPT,
) -> OutputFixingParser[T]:
"""Create an OutputFixingParser from a language model and a parser.
Args:
llm: llm to use for fixing
parser: parser to use for parsing
prompt: prompt to use for fixing
Returns:
OutputFixingParser
"""
chain = LLMChain(llm=llm, prompt=prompt)
return cls(parser=parser, retry_chain=chain)
def parse(self, completion: str) -> T: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,989 | OutputFixingParser is not async | ### System Info
LangChain Python v0.0.237
Based on this code snippet it appears that OutputFixingParser doesn't support async flows.
https://github.com/hwchase17/langchain/blob/df84e1bb64d96377f909651f696f310c43c2f2c5/langchain/output_parsers/fix.py#L46-L52
It's calling the run function and not arun
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Define async callback handler
2. Make LLM return output that is unparsable (invalid JSON or 2 code blocks)
3. OutputFixingParser will fail parsing the output and throw an exception, which will call the LLM via the run function which doesn't await on coroutines. Python will give the following error:
```
RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited
```
### Expected behavior
1. Should work with coroutines as expected | https://github.com/langchain-ai/langchain/issues/7989 | https://github.com/langchain-ai/langchain/pull/8776 | cc908d49a3c23e128fab7c89fa45d7cc4114f028 | 33cdb06b5c9d4d3e7f54d5e1e7c980dfae33923b | "2023-07-20T08:29:12Z" | python | "2023-08-07T21:42:48Z" | libs/langchain/langchain/output_parsers/fix.py | try:
parsed_completion = self.parser.parse(completion)
except OutputParserException as e:
new_completion = self.retry_chain.run(
instructions=self.parser.get_format_instructions(),
completion=completion,
error=repr(e),
)
parsed_completion = self.parser.parse(new_completion)
return parsed_completion
def get_format_instructions(self) -> str:
return self.parser.get_format_instructions()
@property
def _type(self) -> str:
return "output_fixing" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,989 | OutputFixingParser is not async | ### System Info
LangChain Python v0.0.237
Based on this code snippet it appears that OutputFixingParser doesn't support async flows.
https://github.com/hwchase17/langchain/blob/df84e1bb64d96377f909651f696f310c43c2f2c5/langchain/output_parsers/fix.py#L46-L52
It's calling the run function and not arun
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Define async callback handler
2. Make LLM return output that is unparsable (invalid JSON or 2 code blocks)
3. OutputFixingParser will fail parsing the output and throw an exception, which will call the LLM via the run function which doesn't await on coroutines. Python will give the following error:
```
RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited
```
### Expected behavior
1. Should work with coroutines as expected | https://github.com/langchain-ai/langchain/issues/7989 | https://github.com/langchain-ai/langchain/pull/8776 | cc908d49a3c23e128fab7c89fa45d7cc4114f028 | 33cdb06b5c9d4d3e7f54d5e1e7c980dfae33923b | "2023-07-20T08:29:12Z" | python | "2023-08-07T21:42:48Z" | libs/langchain/langchain/output_parsers/retry.py | from __future__ import annotations
from typing import TypeVar
from langchain.chains.llm import LLMChain
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import (
BaseOutputParser,
BasePromptTemplate,
OutputParserException,
PromptValue,
)
from langchain.schema.language_model import BaseLanguageModel
NAIVE_COMPLETION_RETRY = """Prompt:
{prompt}
Completion:
{completion}
Above, the Completion did not satisfy the constraints given in the Prompt.
Please try again:"""
NAIVE_COMPLETION_RETRY_WITH_ERROR = """Prompt:
{prompt}
Completion:
{completion}
Above, the Completion did not satisfy the constraints given in the Prompt.
Details: {error}
Please try again:"""
NAIVE_RETRY_PROMPT = PromptTemplate.from_template(NAIVE_COMPLETION_RETRY)
NAIVE_RETRY_WITH_ERROR_PROMPT = PromptTemplate.from_template(
NAIVE_COMPLETION_RETRY_WITH_ERROR
)
T = TypeVar("T")
class RetryOutputParser(BaseOutputParser[T]): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,989 | OutputFixingParser is not async | ### System Info
LangChain Python v0.0.237
Based on this code snippet it appears that OutputFixingParser doesn't support async flows.
https://github.com/hwchase17/langchain/blob/df84e1bb64d96377f909651f696f310c43c2f2c5/langchain/output_parsers/fix.py#L46-L52
It's calling the run function and not arun
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Define async callback handler
2. Make LLM return output that is unparsable (invalid JSON or 2 code blocks)
3. OutputFixingParser will fail parsing the output and throw an exception, which will call the LLM via the run function which doesn't await on coroutines. Python will give the following error:
```
RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited
```
### Expected behavior
1. Should work with coroutines as expected | https://github.com/langchain-ai/langchain/issues/7989 | https://github.com/langchain-ai/langchain/pull/8776 | cc908d49a3c23e128fab7c89fa45d7cc4114f028 | 33cdb06b5c9d4d3e7f54d5e1e7c980dfae33923b | "2023-07-20T08:29:12Z" | python | "2023-08-07T21:42:48Z" | libs/langchain/langchain/output_parsers/retry.py | """Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt and the completion to another
LLM, and telling it the completion did not satisfy criteria in the prompt.
"""
parser: BaseOutputParser[T]
"""The parser to use to parse the output."""
retry_chain: LLMChain
"""The LLMChain to use to retry the completion."""
@classmethod
def from_llm(
cls,
llm: BaseLanguageModel,
parser: BaseOutputParser[T],
prompt: BasePromptTemplate = NAIVE_RETRY_PROMPT,
) -> RetryOutputParser[T]:
chain = LLMChain(llm=llm, prompt=prompt)
return cls(parser=parser, retry_chain=chain)
def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,989 | OutputFixingParser is not async | ### System Info
LangChain Python v0.0.237
Based on this code snippet it appears that OutputFixingParser doesn't support async flows.
https://github.com/hwchase17/langchain/blob/df84e1bb64d96377f909651f696f310c43c2f2c5/langchain/output_parsers/fix.py#L46-L52
It's calling the run function and not arun
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Define async callback handler
2. Make LLM return output that is unparsable (invalid JSON or 2 code blocks)
3. OutputFixingParser will fail parsing the output and throw an exception, which will call the LLM via the run function which doesn't await on coroutines. Python will give the following error:
```
RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited
```
### Expected behavior
1. Should work with coroutines as expected | https://github.com/langchain-ai/langchain/issues/7989 | https://github.com/langchain-ai/langchain/pull/8776 | cc908d49a3c23e128fab7c89fa45d7cc4114f028 | 33cdb06b5c9d4d3e7f54d5e1e7c980dfae33923b | "2023-07-20T08:29:12Z" | python | "2023-08-07T21:42:48Z" | libs/langchain/langchain/output_parsers/retry.py | """Parse the output of an LLM call using a wrapped parser.
Args:
completion: The chain completion to parse.
prompt_value: The prompt to use to parse the completion.
Returns:
The parsed completion.
"""
try:
parsed_completion = self.parser.parse(completion)
except OutputParserException:
new_completion = self.retry_chain.run(
prompt=prompt_value.to_string(), completion=completion
)
parsed_completion = self.parser.parse(new_completion)
return parsed_completion
def parse(self, completion: str) -> T:
raise NotImplementedError(
"This OutputParser can only be called by the `parse_with_prompt` method."
)
def get_format_instructions(self) -> str:
return self.parser.get_format_instructions()
@property
def _type(self) -> str:
return "retry"
class RetryWithErrorOutputParser(BaseOutputParser[T]): |
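`RetryOutputParser.parse_with_prompt` shown above has the same synchronous limitation, so an async variant would follow the same pattern. Again this is a sketch under the assumption that `LLMChain.arun` is available, not a copy of the merged fix.

```python
async def aparse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T:
    # Async mirror of parse_with_prompt(): awaits the retry chain instead of
    # calling the blocking run() method.
    try:
        return self.parser.parse(completion)
    except OutputParserException:
        new_completion = await self.retry_chain.arun(
            prompt=prompt_value.to_string(), completion=completion
        )
        return self.parser.parse(new_completion)
```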
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,989 | OutputFixingParser is not async | ### System Info
LangChain Python v0.0.237
Based on this code snippet it appears that OutputFixingParser doesn't support async flows.
https://github.com/hwchase17/langchain/blob/df84e1bb64d96377f909651f696f310c43c2f2c5/langchain/output_parsers/fix.py#L46-L52
It's calling the run function and not arun
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Define async callback handler
2. Make LLM return output that is unparsable (invalid JSON or 2 code blocks)
3. OutputFixingParser will fail parsing the output and throw an exception, which will call the LLM via the run function which doesn't await on coroutines. Python will give the following error:
```
RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited
```
### Expected behavior
1. Should work with coroutines as expected | https://github.com/langchain-ai/langchain/issues/7989 | https://github.com/langchain-ai/langchain/pull/8776 | cc908d49a3c23e128fab7c89fa45d7cc4114f028 | 33cdb06b5c9d4d3e7f54d5e1e7c980dfae33923b | "2023-07-20T08:29:12Z" | python | "2023-08-07T21:42:48Z" | libs/langchain/langchain/output_parsers/retry.py | """Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt, the completion, AND the error
that was raised to another language model and telling it that the completion
did not work, and raised the given error. Differs from RetryOutputParser
in that this implementation provides the error that was raised back to the
LLM, which in theory should give it more information on how to fix it.
"""
parser: BaseOutputParser[T]
retry_chain: LLMChain
@classmethod
def from_llm(
cls,
llm: BaseLanguageModel,
parser: BaseOutputParser[T],
prompt: BasePromptTemplate = NAIVE_RETRY_WITH_ERROR_PROMPT,
) -> RetryWithErrorOutputParser[T]:
"""Create a RetryWithErrorOutputParser from an LLM.
Args:
llm: The LLM to use to retry the completion.
parser: The parser to use to parse the output.
prompt: The prompt to use to retry the completion.
Returns:
A RetryWithErrorOutputParser.
"""
chain = LLMChain(llm=llm, prompt=prompt)
return cls(parser=parser, retry_chain=chain)
def parse_with_prompt(self, completion: str, prompt_value: PromptValue) -> T: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,989 | OutputFixingParser is not async | ### System Info
LangChain Python v0.0.237
Based on this code snippet it appears that OutputFixingParser doesn't support async flows.
https://github.com/hwchase17/langchain/blob/df84e1bb64d96377f909651f696f310c43c2f2c5/langchain/output_parsers/fix.py#L46-L52
It's calling the run function and not arun
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Define async callback handler
2. Make LLM return output that is unparsable (invalid JSON or 2 code blocks)
3. OutputFixingParser will fail parsing the output and throw an exception, which will call the LLM via the run function which doesn't await on coroutines. Python will give the following error:
```
RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited
```
### Expected behavior
1. Should work with coroutines as expected | https://github.com/langchain-ai/langchain/issues/7989 | https://github.com/langchain-ai/langchain/pull/8776 | cc908d49a3c23e128fab7c89fa45d7cc4114f028 | 33cdb06b5c9d4d3e7f54d5e1e7c980dfae33923b | "2023-07-20T08:29:12Z" | python | "2023-08-07T21:42:48Z" | libs/langchain/langchain/output_parsers/retry.py | try:
parsed_completion = self.parser.parse(completion)
except OutputParserException as e:
new_completion = self.retry_chain.run(
prompt=prompt_value.to_string(), completion=completion, error=repr(e)
)
parsed_completion = self.parser.parse(new_completion)
return parsed_completion
def parse(self, completion: str) -> T:
raise NotImplementedError(
"This OutputParser can only be called by the `parse_with_prompt` method."
)
def get_format_instructions(self) -> str:
return self.parser.get_format_instructions()
@property
def _type(self) -> str:
return "retry_with_error" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | "2023-08-10T15:18:24Z" | python | "2023-08-10T18:59:39Z" | libs/langchain/langchain/utilities/arxiv.py | """Util that calls Arxiv."""
import logging
import os
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, root_validator
from langchain.schema import Document
logger = logging.getLogger(__name__)
class ArxivAPIWrapper(BaseModel): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | "2023-08-10T15:18:24Z" | python | "2023-08-10T18:59:39Z" | libs/langchain/langchain/utilities/arxiv.py | """Wrapper around ArxivAPI.
To use, you should have the ``arxiv`` python package installed.
https://lukasschwab.me/arxiv.py/index.html
This wrapper will use the Arxiv API to conduct searches and
fetch document summaries. By default, it will return the document summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
Set doc_content_chars_max=None if you don't want to limit the content size.
Attributes:
top_k_results: number of the top-scored document used for the arxiv tool
ARXIV_MAX_QUERY_LENGTH: the cut limit on the query used for the arxiv tool.
load_max_docs: a limit to the number of loaded documents
load_all_available_meta:
if True: the `metadata` of the loaded Documents contains all available
meta info (see https://lukasschwab.me/arxiv.py/index.html#Result),
if False: the `metadata` contains only the published date, title,
authors and summary.
doc_content_chars_max: an optional cut limit for the length of a document's
content
Example:
.. code-block:: python
from langchain.utilities.arxiv import ArxivAPIWrapper
arxiv = ArxivAPIWrapper(
top_k_results = 3,
ARXIV_MAX_QUERY_LENGTH = 300,
load_max_docs = 3,
load_all_available_meta = False, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | "2023-08-10T15:18:24Z" | python | "2023-08-10T18:59:39Z" | libs/langchain/langchain/utilities/arxiv.py | doc_content_chars_max = 40000
)
arxiv.run("tree of thought llm)
"""
arxiv_search: Any
arxiv_exceptions: Any
top_k_results: int = 3
ARXIV_MAX_QUERY_LENGTH = 300
load_max_docs: int = 100
load_all_available_meta: bool = False
doc_content_chars_max: Optional[int] = 4000
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that the python package exists in environment."""
try:
import arxiv
values["arxiv_search"] = arxiv.Search
values["arxiv_exceptions"] = (
arxiv.ArxivError,
arxiv.UnexpectedEmptyPageError,
arxiv.HTTPError,
)
values["arxiv_result"] = arxiv.Result
except ImportError:
raise ImportError(
"Could not import arxiv python package. "
"Please install it with `pip install arxiv`."
)
return values
def run(self, query: str) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | "2023-08-10T15:18:24Z" | python | "2023-08-10T18:59:39Z" | libs/langchain/langchain/utilities/arxiv.py | """
Performs an arxiv search and returns a single string
with the publish date, title, authors, and summary
for each article, separated by two newlines.
If an error occurs or no documents are found, error text
is returned instead. Wrapper for
https://lukasschwab.me/arxiv.py/index.html#Search
Args:
query: a plaintext search query
"""
try:
results = self.arxiv_search(
query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.top_k_results
).results()
except self.arxiv_exceptions as ex:
return f"Arxiv exception: {ex}"
docs = [
f"Published: {result.updated.date()}\n"
f"Title: {result.title}\n"
f"Authors: {', '.join(a.name for a in result.authors)}\n"
f"Summary: {result.summary}"
for result in results
]
if docs:
return "\n\n".join(docs)[: self.doc_content_chars_max]
else:
return "No good Arxiv Result was found"
def load(self, query: str) -> List[Document]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | "2023-08-10T15:18:24Z" | python | "2023-08-10T18:59:39Z" | libs/langchain/langchain/utilities/arxiv.py | """
Run Arxiv search and get the article texts plus the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
Performs an arxiv search, downloads the top k results as PDFs, loads
them as Documents, and returns them in a List.
Args:
query: a plaintext search query
Returns: a list of documents with the document.page_content in text format
"""
try:
import fitz
except ImportError:
raise ImportError(
"PyMuPDF package not found, please install it with "
"`pip install pymupdf`"
)
try:
results = self.arxiv_search(
query[: self.ARXIV_MAX_QUERY_LENGTH], max_results=self.load_max_docs
).results()
except self.arxiv_exceptions as ex:
logger.debug("Error on arxiv: %s", ex)
return []
docs: List[Document] = []
for result in results:
try:
doc_file_name: str = result.download_pdf()
with fitz.open(doc_file_name) as doc_file:
text: str = "".join(page.get_text() for page in doc_file) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | "2023-08-10T15:18:24Z" | python | "2023-08-10T18:59:39Z" | libs/langchain/langchain/utilities/arxiv.py | except FileNotFoundError as f_ex:
logger.debug(f_ex)
continue
if self.load_all_available_meta:
extra_metadata = {
"entry_id": result.entry_id,
"published_first_time": str(result.published.date()),
"comment": result.comment,
"journal_ref": result.journal_ref,
"doi": result.doi,
"primary_category": result.primary_category,
"categories": result.categories,
"links": [link.href for link in result.links],
}
else:
extra_metadata = {}
metadata = {
"Published": str(result.updated.date()),
"Title": result.title,
"Authors": ", ".join(a.name for a in result.authors),
"Summary": result.summary,
**extra_metadata,
}
doc = Document(
page_content=text[: self.doc_content_chars_max], metadata=metadata
)
docs.append(doc)
os.remove(doc_file_name)
return docs |
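`ArxivAPIWrapper.load` passes the query string to `arxiv.Search` verbatim, which is why the exact-title query in issue #9046 can return the wrong paper; the colon in the title appears to be treated as arXiv field syntax (an assumption here, not verified against PR #9061). One workaround, separate from whatever the merged fix does, is to query the title field explicitly:

```python
from langchain.document_loaders import ArxivLoader

# Hypothetical workaround: search the title field ("ti:") as a quoted phrase
# and drop the colon that confuses the query parser.
docs = ArxivLoader(
    query='ti:"MetaGPT Meta Programming for Multi-Agent Collaborative Framework"',
    load_max_docs=1,
).load()
```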
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | "2023-08-10T15:18:24Z" | python | "2023-08-10T18:59:39Z" | libs/langchain/tests/integration_tests/document_loaders/test_arxiv.py | from typing import List
from langchain.document_loaders.arxiv import ArxivLoader
from langchain.schema import Document
def assert_docs(docs: List[Document]) -> None:
for doc in docs:
assert doc.page_content
assert doc.metadata
assert set(doc.metadata) == {"Published", "Title", "Authors", "Summary"}
def test_load_success() -> None:
"""Test that returns one document"""
loader = ArxivLoader(query="1605.08386", load_max_docs=2)
docs = loader.load()
assert len(docs) == 1
print(docs[0].metadata)
print(docs[0].page_content)
assert_docs(docs)
def test_load_returns_no_result() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | "2023-08-10T15:18:24Z" | python | "2023-08-10T18:59:39Z" | libs/langchain/tests/integration_tests/document_loaders/test_arxiv.py | """Test that returns no docs"""
loader = ArxivLoader(query="1605.08386WWW", load_max_docs=2)
docs = loader.load()
assert len(docs) == 0
def test_load_returns_limited_docs() -> None:
"""Test that returns several docs"""
expected_docs = 2
loader = ArxivLoader(query="ChatGPT", load_max_docs=expected_docs)
docs = loader.load()
assert len(docs) == expected_docs
assert_docs(docs)
def test_load_returns_full_set_of_metadata() -> None:
"""Test that returns several docs"""
loader = ArxivLoader(query="ChatGPT", load_max_docs=1, load_all_available_meta=True)
docs = loader.load()
assert len(docs) == 1
for doc in docs:
assert doc.page_content
assert doc.metadata
assert set(doc.metadata).issuperset(
{"Published", "Title", "Authors", "Summary"}
)
print(doc.metadata)
assert len(set(doc.metadata)) > 4 |
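A regression test in the style of this file could pin down the exact-title behaviour from issue #9046. The test below is hypothetical (not present in the repository at this commit) and reuses the module's existing `ArxivLoader` import.

```python
def test_load_exact_title_search() -> None:
    """Hypothetical regression test: searching by an exact paper title."""
    title = "MetaGPT: Meta Programming for Multi-Agent Collaborative Framework"
    loader = ArxivLoader(query=title, load_max_docs=1)
    docs = loader.load()
    assert len(docs) == 1
    assert docs[0].metadata["Title"] == title
```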
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | "2023-08-10T15:18:24Z" | python | "2023-08-10T18:59:39Z" | libs/langchain/tests/integration_tests/utilities/test_arxiv.py | """Integration test for Arxiv API Wrapper."""
from typing import Any, List
import pytest
from langchain.agents.load_tools import load_tools
from langchain.schema import Document
from langchain.tools.base import BaseTool
from langchain.utilities import ArxivAPIWrapper
@pytest.fixture
def api_client() -> ArxivAPIWrapper:
return ArxivAPIWrapper()
def test_run_success(api_client: ArxivAPIWrapper) -> None:
"""Test that returns the correct answer"""
output = api_client.run("1605.08386")
assert "Heat-bath random walks with Markov bases" in output
def test_run_returns_several_docs(api_client: ArxivAPIWrapper) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | "2023-08-10T15:18:24Z" | python | "2023-08-10T18:59:39Z" | libs/langchain/tests/integration_tests/utilities/test_arxiv.py | """Test that returns several docs"""
output = api_client.run("Caprice Stanley")
assert "On Mixing Behavior of a Family of Random Walks" in output
def test_run_returns_no_result(api_client: ArxivAPIWrapper) -> None:
"""Test that gives no result."""
output = api_client.run("1605.08386WWW")
assert "No good Arxiv Result was found" == output
def assert_docs(docs: List[Document]) -> None:
for doc in docs:
assert doc.page_content
assert doc.metadata
assert set(doc.metadata) == {"Published", "Title", "Authors", "Summary"}
def test_load_success(api_client: ArxivAPIWrapper) -> None:
"""Test that returns one document"""
docs = api_client.load("1605.08386")
assert len(docs) == 1
assert_docs(docs)
def test_load_returns_no_result(api_client: ArxivAPIWrapper) -> None:
"""Test that returns no docs"""
docs = api_client.load("1605.08386WWW")
assert len(docs) == 0
def test_load_returns_limited_docs() -> None:
"""Test that returns several docs"""
expected_docs = 2
api_client = ArxivAPIWrapper(load_max_docs=expected_docs)
docs = api_client.load("ChatGPT")
assert len(docs) == expected_docs
assert_docs(docs)
def test_load_returns_limited_doc_content_chars() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | "2023-08-10T15:18:24Z" | python | "2023-08-10T18:59:39Z" | libs/langchain/tests/integration_tests/utilities/test_arxiv.py | """Test that returns limited doc_content_chars_max"""
doc_content_chars_max = 100
api_client = ArxivAPIWrapper(doc_content_chars_max=doc_content_chars_max)
docs = api_client.load("1605.08386")
assert len(docs[0].page_content) == doc_content_chars_max
def test_load_returns_unlimited_doc_content_chars() -> None:
"""Test that returns unlimited doc_content_chars_max"""
doc_content_chars_max = None
api_client = ArxivAPIWrapper(doc_content_chars_max=doc_content_chars_max)
docs = api_client.load("1605.08386")
assert len(docs[0].page_content) == 54337
def test_load_returns_full_set_of_metadata() -> None:
"""Test that returns several docs"""
api_client = ArxivAPIWrapper(load_max_docs=1, load_all_available_meta=True)
docs = api_client.load("ChatGPT")
assert len(docs) == 1
for doc in docs:
assert doc.page_content
assert doc.metadata
assert set(doc.metadata).issuperset(
{"Published", "Title", "Authors", "Summary"}
)
print(doc.metadata)
assert len(set(doc.metadata)) > 4
def _load_arxiv_from_universal_entry(**kwargs: Any) -> BaseTool: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 9,046 | ArxivLoader incorrect results | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | https://github.com/langchain-ai/langchain/issues/9046 | https://github.com/langchain-ai/langchain/pull/9061 | e94a5d753fe01aff1fa1592cd59d37fa64ef24a2 | fcbbddedaed196b0aa0377ca8c78b3410f62420f | "2023-08-10T15:18:24Z" | python | "2023-08-10T18:59:39Z" | libs/langchain/tests/integration_tests/utilities/test_arxiv.py | tools = load_tools(["arxiv"], **kwargs)
assert len(tools) == 1, "loaded more than 1 tool"
return tools[0]
def test_load_arxiv_from_universal_entry() -> None:
arxiv_tool = _load_arxiv_from_universal_entry()
output = arxiv_tool("Caprice Stanley")
assert (
"On Mixing Behavior of a Family of Random Walks" in output
), "failed to fetch a valid result"
def test_load_arxiv_from_universal_entry_with_params() -> None:
params = {
"top_k_results": 1,
"load_max_docs": 10,
"load_all_available_meta": True,
}
arxiv_tool = _load_arxiv_from_universal_entry(**params)
assert isinstance(arxiv_tool, ArxivAPIWrapper)
wp = arxiv_tool.api_wrapper
assert wp.top_k_results == 1, "failed to assert top_k_results"
assert wp.load_max_docs == 10, "failed to assert load_max_docs"
assert (
wp.load_all_available_meta is True
), "failed to assert load_all_available_meta" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using milvus where for my question I'm getting 0 documents and so index out of range error occurs
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | """Chain that just formats a prompt and calls an LLM."""
from __future__ import annotations
import warnings
from typing import Any, Dict, List, Optional, Sequence, Tuple, Union
from pydantic import Extra, Field
from langchain.callbacks.manager import (
AsyncCallbackManager,
AsyncCallbackManagerForChainRun,
CallbackManager,
CallbackManagerForChainRun,
Callbacks,
)
from langchain.chains.base import Chain
from langchain.load.dump import dumpd
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import (
BaseLLMOutputParser,
BasePromptTemplate,
LLMResult,
PromptValue,
StrOutputParser,
)
from langchain.schema.language_model import BaseLanguageModel
from langchain.utils.input import get_colored_text
class LLMChain(Chain): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using milvus where for my question I'm getting 0 documents and so index out of range error occurs
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | """Chain to run queries against LLMs.
Example:
.. code-block:: python
from langchain import LLMChain, OpenAI, PromptTemplate
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(
input_variables=["adjective"], template=prompt_template
)
llm = LLMChain(llm=OpenAI(), prompt=prompt)
"""
@property
def lc_serializable(self) -> bool:
return True
prompt: BasePromptTemplate
"""Prompt object to use."""
llm: BaseLanguageModel
"""Language model to call."""
output_key: str = "text"
output_parser: BaseLLMOutputParser = Field(default_factory=StrOutputParser)
"""Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise."""
return_final_only: bool = True
"""Whether to return only the final parsed result. Defaults to True.
If false, will return a bunch of extra information about the generation."""
llm_kwargs: dict = Field(default_factory=dict)
class Config: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using milvus where for my question I'm getting 0 documents and so index out of range error occurs
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | """Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def input_keys(self) -> List[str]:
"""Will be whatever keys the prompt expects.
:meta private:
"""
return self.prompt.input_variables
@property
def output_keys(self) -> List[str]:
"""Will always return text key.
:meta private:
"""
if self.return_final_only:
return [self.output_key]
else:
return [self.output_key, "full_generation"]
def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
response = self.generate([inputs], run_manager=run_manager)
return self.create_outputs(response)[0]
def generate( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using milvus where for my question I'm getting 0 documents and so index out of range error occurs
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | self,
input_list: List[Dict[str, Any]],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> LLMResult:
"""Generate LLM result from inputs."""
prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
return self.llm.generate_prompt(
prompts,
stop,
callbacks=run_manager.get_child() if run_manager else None,
**self.llm_kwargs,
)
async def agenerate(
self,
input_list: List[Dict[str, Any]],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> LLMResult:
"""Generate LLM result from inputs."""
prompts, stop = await self.aprep_prompts(input_list, run_manager=run_manager)
return await self.llm.agenerate_prompt(
prompts,
stop,
callbacks=run_manager.get_child() if run_manager else None,
**self.llm_kwargs,
)
def prep_prompts( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using milvus where for my question I'm getting 0 documents and so index out of range error occurs
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | self,
input_list: List[Dict[str, Any]],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Tuple[List[PromptValue], Optional[List[str]]]:
"""Prepare prompts from inputs."""
stop = None
if "stop" in input_list[0]:
stop = input_list[0]["stop"]
prompts = []
for inputs in input_list:
selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
prompt = self.prompt.format_prompt(**selected_inputs)
_colored_text = get_colored_text(prompt.to_string(), "green")
_text = "Prompt after formatting:\n" + _colored_text
if run_manager:
run_manager.on_text(_text, end="\n", verbose=self.verbose)
if "stop" in inputs and inputs["stop"] != stop:
raise ValueError(
"If `stop` is present in any inputs, should be present in all."
)
prompts.append(prompt)
return prompts, stop
async def aprep_prompts( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using milvus where for my question I'm getting 0 documents and so index out of range error occurs
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | self,
input_list: List[Dict[str, Any]],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Tuple[List[PromptValue], Optional[List[str]]]:
"""Prepare prompts from inputs."""
stop = None
if "stop" in input_list[0]:
stop = input_list[0]["stop"]
prompts = []
for inputs in input_list:
selected_inputs = {k: inputs[k] for k in self.prompt.input_variables}
prompt = self.prompt.format_prompt(**selected_inputs)
_colored_text = get_colored_text(prompt.to_string(), "green")
_text = "Prompt after formatting:\n" + _colored_text
if run_manager:
await run_manager.on_text(_text, end="\n", verbose=self.verbose)
if "stop" in inputs and inputs["stop"] != stop:
raise ValueError(
"If `stop` is present in any inputs, should be present in all."
)
prompts.append(prompt)
return prompts, stop
def apply( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using milvus where for my question I'm getting 0 documents and so index out of range error occurs
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None
) -> List[Dict[str, str]]:
"""Utilize the LLM generate method for speed gains."""
callback_manager = CallbackManager.configure(
callbacks, self.callbacks, self.verbose
)
run_manager = callback_manager.on_chain_start(
dumpd(self),
{"input_list": input_list},
)
try:
response = self.generate(input_list, run_manager=run_manager)
except (KeyboardInterrupt, Exception) as e:
run_manager.on_chain_error(e)
raise e
outputs = self.create_outputs(response)
run_manager.on_chain_end({"outputs": outputs})
return outputs
async def aapply( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using milvus where for my question I'm getting 0 documents and so index out of range error occurs
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None
) -> List[Dict[str, str]]:
"""Utilize the LLM generate method for speed gains."""
callback_manager = AsyncCallbackManager.configure(
callbacks, self.callbacks, self.verbose
)
run_manager = await callback_manager.on_chain_start(
dumpd(self),
{"input_list": input_list},
)
try:
response = await self.agenerate(input_list, run_manager=run_manager)
except (KeyboardInterrupt, Exception) as e:
await run_manager.on_chain_error(e)
raise e
outputs = self.create_outputs(response)
await run_manager.on_chain_end({"outputs": outputs})
return outputs
@property
def _run_output_key(self) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using milvus where for my question I'm getting 0 documents and so index out of range error occurs
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | return self.output_key
def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:
"""Create outputs from response."""
result = [
{
self.output_key: self.output_parser.parse_result(generation),
"full_generation": generation,
}
for generation in llm_result.generations
]
if self.return_final_only:
result = [{self.output_key: r[self.output_key]} for r in result]
return result
async def _acall( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using milvus where for my question I'm getting 0 documents and so index out of range error occurs
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | self,
inputs: Dict[str, Any],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, str]:
response = await self.agenerate([inputs], run_manager=run_manager)
return self.create_outputs(response)[0]
def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
"""Format prompt with kwargs and pass to LLM.
Args:
callbacks: Callbacks to pass to LLMChain
**kwargs: Keys to pass to prompt template.
Returns:
Completion from LLM.
Example:
.. code-block:: python
completion = llm.predict(adjective="funny")
"""
return self(kwargs, callbacks=callbacks)[self.output_key]
async def apredict(self, callbacks: Callbacks = None, **kwargs: Any) -> str: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using milvus where for my question I'm getting 0 documents and so index out of range error occurs
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | """Format prompt with kwargs and pass to LLM.
Args:
callbacks: Callbacks to pass to LLMChain
**kwargs: Keys to pass to prompt template.
Returns:
Completion from LLM.
Example:
.. code-block:: python
completion = llm.predict(adjective="funny")
"""
return (await self.acall(kwargs, callbacks=callbacks))[self.output_key]
def predict_and_parse(
self, callbacks: Callbacks = None, **kwargs: Any
) -> Union[str, List[str], Dict[str, Any]]:
"""Call predict and then parse the results."""
warnings.warn(
"The predict_and_parse method is deprecated, "
"instead pass an output parser directly to LLMChain."
)
result = self.predict(callbacks=callbacks, **kwargs)
if self.prompt.output_parser is not None:
return self.prompt.output_parser.parse(result)
else:
return result
async def apredict_and_parse( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using milvus where for my question I'm getting 0 documents and so index out of range error occurs
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | self, callbacks: Callbacks = None, **kwargs: Any
) -> Union[str, List[str], Dict[str, str]]:
"""Call apredict and then parse the results."""
warnings.warn(
"The apredict_and_parse method is deprecated, "
"instead pass an output parser directly to LLMChain."
)
result = await self.apredict(callbacks=callbacks, **kwargs)
if self.prompt.output_parser is not None:
return self.prompt.output_parser.parse(result)
else:
return result
def apply_and_parse(
self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None
) -> Sequence[Union[str, List[str], Dict[str, str]]]:
"""Call apply and then parse the results."""
warnings.warn(
"The apply_and_parse method is deprecated, "
"instead pass an output parser directly to LLMChain."
)
result = self.apply(input_list, callbacks=callbacks)
return self._parse_generation(result)
def _parse_generation( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 1,733 | list index out of range error if similarity search gives 0 docs | https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using Milvus, and for my question the similarity search returns 0 documents, so an index-out-of-range error occurs.
The error is raised at this line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | https://github.com/langchain-ai/langchain/issues/1733 | https://github.com/langchain-ai/langchain/pull/5769 | c0acbdca1b5884ac90d17908fb2bb555a9ed9909 | 2184e3a4005f5c48126523cce92930fca6a31760 | "2023-03-17T11:14:48Z" | python | "2023-08-11T05:50:39Z" | libs/langchain/langchain/chains/llm.py | self, generation: List[Dict[str, str]]
) -> Sequence[Union[str, List[str], Dict[str, str]]]:
if self.prompt.output_parser is not None:
return [
self.prompt.output_parser.parse(res[self.output_key])
for res in generation
]
else:
return generation
async def aapply_and_parse(
self, input_list: List[Dict[str, Any]], callbacks: Callbacks = None
) -> Sequence[Union[str, List[str], Dict[str, str]]]:
"""Call apply and then parse the results."""
warnings.warn(
"The aapply_and_parse method is deprecated, "
"instead pass an output parser directly to LLMChain."
)
result = await self.aapply(input_list, callbacks=callbacks)
return self._parse_generation(result)
@property
def _chain_type(self) -> str:
return "llm_chain"
@classmethod
def from_string(cls, llm: BaseLanguageModel, template: str) -> LLMChain:
"""Create LLMChain from LLM and template."""
prompt_template = PromptTemplate.from_template(template)
return cls(llm=llm, prompt=prompt_template) |
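Issue 1733 above is the zero-result case: the chain indexes the first generation of the first prompt, so when the retriever hands it nothing the lookup blows up. A minimal caller-side guard is sketched below; it is an illustration rather than the fix that landed in the library, and the `llm`/`vectorstore` objects and the fallback message are assumptions.

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate


def answer_with_guard(llm, vectorstore, question: str) -> str:
    # Retrieve first and bail out before anything indexes an empty result set.
    docs = vectorstore.similarity_search(question, k=4)
    if not docs:
        # Assumption: return a sentinel answer instead of raising IndexError.
        return "No matching documents were found for this question."
    context = "\n\n".join(doc.page_content for doc in docs)
    prompt = PromptTemplate.from_template(
        "Answer using only this context:\n{context}\n\nQuestion: {question}"
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    return chain.predict(context=context, question=question)
```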
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,001 | Azure OpenAI Embeddings failed due to no deployment_id set. | ### System Info
Broken by #4915
Error: `Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.embedding.Embedding'>`
I'm putting a PR out to fix this now.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run example notebook: https://github.com/hwchase17/langchain/blob/22d844dc0795e7e53a4cc499bf4974cb83df490d/docs/modules/models/text_embedding/examples/azureopenai.ipynb
### Expected behavior
Embedding using Azure OpenAI should work. | https://github.com/langchain-ai/langchain/issues/5001 | https://github.com/langchain-ai/langchain/pull/5002 | 45741bcc1b65e588e560b60e347ab391858d53f5 | 1d3735a84c64549d4ef338506ae0b68d53541b44 | "2023-05-19T20:18:47Z" | python | "2023-08-11T22:43:01Z" | libs/langchain/tests/integration_tests/embeddings/test_openai.py | """Test openai embeddings."""
import numpy as np
import openai
import pytest
from langchain.embeddings.openai import OpenAIEmbeddings
def test_openai_embedding_documents() -> None:
"""Test openai embeddings."""
documents = ["foo bar"]
embedding = OpenAIEmbeddings()
output = embedding.embed_documents(documents)
assert len(output) == 1
assert len(output[0]) == 1536
def test_openai_embedding_documents_multiple() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,001 | Azure OpenAI Embeddings failed due to no deployment_id set. | ### System Info
Broken by #4915
Error: `Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.embedding.Embedding'>`
I'm putting a PR out to fix this now.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run example notebook: https://github.com/hwchase17/langchain/blob/22d844dc0795e7e53a4cc499bf4974cb83df490d/docs/modules/models/text_embedding/examples/azureopenai.ipynb
### Expected behavior
Embedding using Azure OpenAI should work. | https://github.com/langchain-ai/langchain/issues/5001 | https://github.com/langchain-ai/langchain/pull/5002 | 45741bcc1b65e588e560b60e347ab391858d53f5 | 1d3735a84c64549d4ef338506ae0b68d53541b44 | "2023-05-19T20:18:47Z" | python | "2023-08-11T22:43:01Z" | libs/langchain/tests/integration_tests/embeddings/test_openai.py | """Test openai embeddings."""
documents = ["foo bar", "bar foo", "foo"]
embedding = OpenAIEmbeddings(chunk_size=2)
embedding.embedding_ctx_length = 8191
output = embedding.embed_documents(documents)
assert len(output) == 3
assert len(output[0]) == 1536
assert len(output[1]) == 1536
assert len(output[2]) == 1536
@pytest.mark.asyncio
async def test_openai_embedding_documents_async_multiple() -> None:
"""Test openai embeddings."""
documents = ["foo bar", "bar foo", "foo"]
embedding = OpenAIEmbeddings(chunk_size=2)
embedding.embedding_ctx_length = 8191
output = await embedding.aembed_documents(documents)
assert len(output) == 3
assert len(output[0]) == 1536
assert len(output[1]) == 1536
assert len(output[2]) == 1536
def test_openai_embedding_query() -> None:
"""Test openai embeddings."""
document = "foo bar"
embedding = OpenAIEmbeddings()
output = embedding.embed_query(document)
assert len(output) == 1536
@pytest.mark.asyncio
async def test_openai_embedding_async_query() -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 5,001 | Azure OpenAI Embeddings failed due to no deployment_id set. | ### System Info
Broken by #4915
Error: `Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.embedding.Embedding'>`
I'm putting a PR out to fix this now.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run example notebook: https://github.com/hwchase17/langchain/blob/22d844dc0795e7e53a4cc499bf4974cb83df490d/docs/modules/models/text_embedding/examples/azureopenai.ipynb
### Expected behavior
Embedding using Azure OpenAI should work. | https://github.com/langchain-ai/langchain/issues/5001 | https://github.com/langchain-ai/langchain/pull/5002 | 45741bcc1b65e588e560b60e347ab391858d53f5 | 1d3735a84c64549d4ef338506ae0b68d53541b44 | "2023-05-19T20:18:47Z" | python | "2023-08-11T22:43:01Z" | libs/langchain/tests/integration_tests/embeddings/test_openai.py | """Test openai embeddings."""
document = "foo bar"
embedding = OpenAIEmbeddings()
output = await embedding.aembed_query(document)
assert len(output) == 1536
def test_openai_embedding_with_empty_string() -> None:
"""Test openai embeddings with empty string."""
document = ["", "abc"]
embedding = OpenAIEmbeddings()
output = embedding.embed_documents(document)
assert len(output) == 2
assert len(output[0]) == 1536
expected_output = openai.Embedding.create(input="", model="text-embedding-ada-002")[
"data"
][0]["embedding"]
assert np.allclose(output[0], expected_output)
assert len(output[1]) == 1536
def test_embed_documents_normalized() -> None:
output = OpenAIEmbeddings().embed_documents(["foo walked to the market"])
assert np.isclose(np.linalg.norm(output[0]), 1.0)
def test_embed_query_normalized() -> None:
output = OpenAIEmbeddings().embed_query("foo walked to the market")
assert np.isclose(np.linalg.norm(output), 1.0) |
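The Azure failure in issue 5001 ("Must provide an 'engine' or 'deployment_id'") is a configuration problem rather than a test problem: on Azure the embeddings client has to be told which deployment to call. A rough sketch follows; every value below (resource URL, key, API version, deployment name) is a placeholder, and the exact parameter set can differ between langchain releases.

```python
import os

from langchain.embeddings.openai import OpenAIEmbeddings

# Placeholder Azure settings -- substitute the values for your own resource.
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://my-resource.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "my-azure-key"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"

embeddings = OpenAIEmbeddings(
    deployment="my-ada-002",  # Azure deployment name, not the OpenAI model name
    model="text-embedding-ada-002",
    chunk_size=1,  # Azure has historically capped the embedding batch size
)
print(len(embeddings.embed_query("foo bar")))  # 1536 for ada-002
```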
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | """Base callback handler that can be used to handle callbacks in langchain."""
from __future__ import annotations
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Sequence, Union
from uuid import UUID
if TYPE_CHECKING:
from langchain.schema.agent import AgentAction, AgentFinish
from langchain.schema.document import Document
from langchain.schema.messages import BaseMessage
from langchain.schema.output import LLMResult
class RetrieverManagerMixin:
"""Mixin for Retriever callbacks."""
def on_retriever_error(
self,
error: Union[Exception, KeyboardInterrupt],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
"""Run when Retriever errors."""
def on_retriever_end(
self,
documents: Sequence[Document],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
"""Run when Retriever ends running."""
class LLMManagerMixin: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | """Mixin for LLM callbacks."""
def on_llm_new_token(
self,
token: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
"""Run on new LLM token. Only available when streaming is enabled."""
def on_llm_end(
self,
response: LLMResult,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
"""Run when LLM ends running."""
def on_llm_error(
self,
error: Union[Exception, KeyboardInterrupt],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
"""Run when LLM errors."""
class ChainManagerMixin: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | """Mixin for chain callbacks."""
def on_chain_end(
self,
outputs: Dict[str, Any],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
"""Run when chain ends running."""
def on_chain_error(
self,
error: Union[Exception, KeyboardInterrupt],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
"""Run when chain errors."""
def on_agent_action(
self,
action: AgentAction,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
"""Run on agent action."""
def on_agent_finish( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | self,
finish: AgentFinish,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
"""Run on agent end."""
class ToolManagerMixin:
"""Mixin for tool callbacks."""
def on_tool_end(
self,
output: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
"""Run when tool ends running."""
def on_tool_error(
self,
error: Union[Exception, KeyboardInterrupt],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
"""Run when tool errors."""
class CallbackManagerMixin: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | """Mixin for callback manager."""
def on_llm_start(
self,
serialized: Dict[str, Any],
prompts: List[str],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> Any:
"""Run when LLM starts running."""
def on_chat_model_start( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | self,
serialized: Dict[str, Any],
messages: List[List[BaseMessage]],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> Any:
"""Run when a chat model starts running."""
raise NotImplementedError(
f"{self.__class__.__name__} does not implement `on_chat_model_start`"
)
def on_retriever_start(
self,
serialized: Dict[str, Any],
query: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> Any:
"""Run when Retriever starts running."""
def on_chain_start( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | self,
serialized: Dict[str, Any],
inputs: Dict[str, Any],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> Any:
"""Run when chain starts running."""
def on_tool_start(
self,
serialized: Dict[str, Any],
input_str: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> Any:
"""Run when tool starts running."""
class RunManagerMixin: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | """Mixin for run manager."""
def on_text(
self,
text: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
**kwargs: Any,
) -> Any:
"""Run on arbitrary text."""
class BaseCallbackHandler(
LLMManagerMixin,
ChainManagerMixin,
ToolManagerMixin,
RetrieverManagerMixin,
CallbackManagerMixin,
RunManagerMixin,
):
"""Base callback handler that can be used to handle callbacks from langchain."""
raise_error: bool = False
run_inline: bool = False
@property
def ignore_llm(self) -> bool: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | """Whether to ignore LLM callbacks."""
return False
@property
def ignore_retry(self) -> bool:
"""Whether to ignore retry callbacks."""
return False
@property
def ignore_chain(self) -> bool:
"""Whether to ignore chain callbacks."""
return False
@property
def ignore_agent(self) -> bool:
"""Whether to ignore agent callbacks."""
return False
@property
def ignore_retriever(self) -> bool:
"""Whether to ignore retriever callbacks."""
return False
@property
def ignore_chat_model(self) -> bool:
"""Whether to ignore chat model callbacks."""
return False
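# Editorial example, separate from the library-side sketch above and not part of
# this module: user code hit by issue 8542 can subclass the handler it passes in
# and supply the missing hook itself. The retry_state attribute access below is
# an assumption about tenacity's RetryCallState.
class RetryAwareHandler(BaseCallbackHandler):
    """Log retry events instead of raising AttributeError."""

    def on_retry(self, retry_state: Any, **kwargs: Any) -> None:
        attempt = getattr(retry_state, "attempt_number", "?")
        print(f"LLM call retried (attempt {attempt})")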
class AsyncCallbackHandler(BaseCallbackHandler): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | """Async callback handler that can be used to handle callbacks from langchain."""
async def on_llm_start(
self,
serialized: Dict[str, Any],
prompts: List[str],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
"""Run when LLM starts running."""
async def on_chat_model_start(
self,
serialized: Dict[str, Any],
messages: List[List[BaseMessage]],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> Any:
"""Run when a chat model starts running."""
raise NotImplementedError(
f"{self.__class__.__name__} does not implement `on_chat_model_start`"
)
async def on_llm_new_token( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | self,
token: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
"""Run on new LLM token. Only available when streaming is enabled."""
async def on_llm_end(
self,
response: LLMResult,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
"""Run when LLM ends running."""
async def on_llm_error( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | self,
error: Union[Exception, KeyboardInterrupt],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
"""Run when LLM errors."""
async def on_chain_start(
self,
serialized: Dict[str, Any],
inputs: Dict[str, Any],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
"""Run when chain starts running."""
async def on_chain_end( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | self,
outputs: Dict[str, Any],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
"""Run when chain ends running."""
async def on_chain_error(
self,
error: Union[Exception, KeyboardInterrupt],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
"""Run when chain errors."""
async def on_tool_start( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | self,
serialized: Dict[str, Any],
input_str: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
"""Run when tool starts running."""
async def on_tool_end(
self,
output: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
"""Run when tool ends running."""
async def on_tool_error( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | self,
error: Union[Exception, KeyboardInterrupt],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
"""Run when tool errors."""
async def on_text(
self,
text: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
"""Run on arbitrary text."""
async def on_agent_action(
self,
action: AgentAction,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
"""Run on agent action."""
async def on_agent_finish( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | self,
finish: AgentFinish,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
"""Run on agent end."""
async def on_retriever_start(
self,
serialized: Dict[str, Any],
query: str,
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
"""Run on retriever start."""
async def on_retriever_end( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | self,
documents: Sequence[Document],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
"""Run on retriever end."""
async def on_retriever_error(
self,
error: Union[Exception, KeyboardInterrupt],
*,
run_id: UUID,
parent_run_id: Optional[UUID] = None,
tags: Optional[List[str]] = None,
**kwargs: Any,
) -> None:
"""Run on retriever error."""
class BaseCallbackManager(CallbackManagerMixin): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | """Base callback manager that handles callbacks from LangChain."""
def __init__(
self,
handlers: List[BaseCallbackHandler],
inheritable_handlers: Optional[List[BaseCallbackHandler]] = None,
parent_run_id: Optional[UUID] = None,
*,
tags: Optional[List[str]] = None,
inheritable_tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
inheritable_metadata: Optional[Dict[str, Any]] = None,
) -> None:
"""Initialize callback manager."""
self.handlers: List[BaseCallbackHandler] = handlers
self.inheritable_handlers: List[BaseCallbackHandler] = (
inheritable_handlers or []
)
self.parent_run_id: Optional[UUID] = parent_run_id
self.tags = tags or []
self.inheritable_tags = inheritable_tags or []
self.metadata = metadata or {}
self.inheritable_metadata = inheritable_metadata or {}
@property
def is_async(self) -> bool: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | """Whether the callback manager is async."""
return False
def add_handler(self, handler: BaseCallbackHandler, inherit: bool = True) -> None:
"""Add a handler to the callback manager."""
if handler not in self.handlers:
self.handlers.append(handler)
if inherit and handler not in self.inheritable_handlers:
self.inheritable_handlers.append(handler)
def remove_handler(self, handler: BaseCallbackHandler) -> None:
"""Remove a handler from the callback manager."""
self.handlers.remove(handler)
self.inheritable_handlers.remove(handler)
def set_handlers(
self, handlers: List[BaseCallbackHandler], inherit: bool = True
) -> None:
"""Set handlers as the only handlers on the callback manager."""
self.handlers = []
self.inheritable_handlers = []
for handler in handlers:
self.add_handler(handler, inherit=inherit)
def set_handler(self, handler: BaseCallbackHandler, inherit: bool = True) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/langchain/callbacks/base.py | """Set handler as the only handler on the callback manager."""
self.set_handlers([handler], inherit=inherit)
def add_tags(self, tags: List[str], inherit: bool = True) -> None:
for tag in tags:
if tag in self.tags:
self.remove_tags([tag])
self.tags.extend(tags)
if inherit:
self.inheritable_tags.extend(tags)
def remove_tags(self, tags: List[str]) -> None:
for tag in tags:
self.tags.remove(tag)
self.inheritable_tags.remove(tag)
def add_metadata(self, metadata: Dict[str, Any], inherit: bool = True) -> None:
self.metadata.update(metadata)
if inherit:
self.inheritable_metadata.update(metadata)
def remove_metadata(self, keys: List[str]) -> None:
for key in keys:
self.metadata.pop(key)
self.inheritable_metadata.pop(key)
Callbacks = Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] |
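For reference, a short usage sketch of the manager API shown in this chunk (handler registration, inheritable flags, tags and metadata); the `StdOutCallbackHandler` choice is just one concrete handler picked for illustration:
```python
from langchain.callbacks.base import BaseCallbackManager
from langchain.callbacks.stdout import StdOutCallbackHandler

manager = BaseCallbackManager(handlers=[])

# Registered with inherit=True, so child run managers created from this one
# should also receive the handler's events.
manager.add_handler(StdOutCallbackHandler(), inherit=True)

# Tags and metadata follow the same inherit / non-inherit split.
manager.add_tags(["billing"], inherit=True)
manager.add_metadata({"team": "qa"}, inherit=False)
```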
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/tests/unit_tests/callbacks/test_openai_info.py | import pytest
from langchain.callbacks import OpenAICallbackHandler
from langchain.llms.openai import BaseOpenAI
from langchain.schema import LLMResult
@pytest.fixture
def handler() -> OpenAICallbackHandler:
return OpenAICallbackHandler()
def test_on_llm_end(handler: OpenAICallbackHandler) -> None:
response = LLMResult(
generations=[],
llm_output={
"token_usage": {
"prompt_tokens": 2,
"completion_tokens": 1,
"total_tokens": 3,
},
"model_name": BaseOpenAI.__fields__["model_name"].default,
},
)
handler.on_llm_end(response)
assert handler.successful_requests == 1
assert handler.total_tokens == 3
assert handler.prompt_tokens == 2
assert handler.completion_tokens == 1
assert handler.total_cost > 0
def test_on_llm_end_custom_model(handler: OpenAICallbackHandler) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/tests/unit_tests/callbacks/test_openai_info.py | response = LLMResult(
generations=[],
llm_output={
"token_usage": {
"prompt_tokens": 2,
"completion_tokens": 1,
"total_tokens": 3,
},
"model_name": "foo-bar",
},
)
handler.on_llm_end(response)
assert handler.total_cost == 0
def test_on_llm_end_finetuned_model(handler: OpenAICallbackHandler) -> None: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/tests/unit_tests/callbacks/test_openai_info.py | response = LLMResult(
generations=[],
llm_output={
"token_usage": {
"prompt_tokens": 2,
"completion_tokens": 1,
"total_tokens": 3,
},
"model_name": "ada:ft-your-org:custom-model-name-2022-02-15-04-21-04",
},
)
handler.on_llm_end(response)
assert handler.total_cost > 0
@pytest.mark.parametrize(
"model_name,expected_cost",
[
("gpt-35-turbo", 0.0035),
("gpt-35-turbo-0301", 0.0035),
(
"gpt-35-turbo-0613",
0.0035,
),
(
"gpt-35-turbo-16k-0613",
0.007,
),
(
"gpt-35-turbo-16k", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/tests/unit_tests/callbacks/test_openai_info.py | 0.007,
),
("gpt-4", 0.09),
("gpt-4-0314", 0.09),
("gpt-4-0613", 0.09),
("gpt-4-32k", 0.18),
("gpt-4-32k-0314", 0.18),
("gpt-4-32k-0613", 0.18),
],
)
def test_on_llm_end_azure_openai(
handler: OpenAICallbackHandler, model_name: str, expected_cost: float
) -> None:
response = LLMResult(
generations=[],
llm_output={
"token_usage": {
"prompt_tokens": 1000,
"completion_tokens": 1000,
"total_tokens": 2000,
},
"model_name": model_name,
},
)
handler.on_llm_end(response)
assert handler.total_cost == expected_cost
@pytest.mark.parametrize(
"model_name", ["gpt-35-turbo-16k-0301", "gpt-4-0301", "gpt-4-32k-0301"]
)
def test_on_llm_end_no_cost_invalid_model( |
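The parametrized expectations above imply a per-1k-token price table keyed by a normalized model name. A small sketch of how those figures can be reproduced with the helpers in `langchain.callbacks.openai_info` (treat the helper names as an assumption if your version differs):
```python
from langchain.callbacks.openai_info import (
    get_openai_token_cost_for_model,
    standardize_model_name,
)

name = standardize_model_name("gpt-4-0613")
prompt_cost = get_openai_token_cost_for_model(name, 1000)
completion_cost = get_openai_token_cost_for_model(name, 1000, is_completion=True)
# 1000 prompt + 1000 completion tokens -> expected to match the 0.09 case above.
print(prompt_cost + completion_cost)
```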
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,542 | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | https://github.com/langchain-ai/langchain/issues/8542 | https://github.com/langchain-ai/langchain/pull/9230 | d0a0d560add6c5bc6ded60be506a87d98bf333c3 | c478fc208ed4c29e979abeb7a532eb4d01431e1b | "2023-07-31T21:01:43Z" | python | "2023-08-14T23:45:17Z" | libs/langchain/tests/unit_tests/callbacks/test_openai_info.py | handler: OpenAICallbackHandler, model_name: str
) -> None:
response = LLMResult(
generations=[],
llm_output={
"token_usage": {
"prompt_tokens": 1000,
"completion_tokens": 1000,
"total_tokens": 2000,
},
"model_name": model_name,
},
)
handler.on_llm_end(response)
assert handler.total_cost == 0 |
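Given the `on_retry` AttributeError reported above, a regression check in the same style as these tests could simply assert that the handler exposes the hook at all; this is a sketch, not a test taken from the repository:
```python
def test_handler_exposes_on_retry(handler: OpenAICallbackHandler) -> None:
    # Guards against the AttributeError from the report; the hook's exact
    # signature is not exercised here, only its presence.
    assert callable(getattr(handler, "on_retry", None))
```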
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,184 | Issue: RetrievalQAWithSourcesChain gives error 'too many values to unpack (expected 2)' after running. | Hello,
I'm using _langchain_ for QA with court case documents. More specifically, the RetrievalQAWithSourcesChain to retrieve the answer and document source information. However, when running the chain with embedded documents, I get the following error:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
The passed documents are the sections from the court case. I added the following **metadata** fields:
1. Source: PDF file name.
2. Section: Name of the section
3. Section_chunk: Numerical value used for identification in case the section was divided into chunks.
4. Page: Page where the section chunk starts.
The documents are passed as retriever to the chain with FAISS (FAISS.from_documents(documents, self.embeddings)).
I tried out two approaches (both resulting in the same error):
1. providing the _load_qa_chain_ as chain
2. creating it using the class method **_.from_chain_type_**
My question is why this error occurs, and also whether the type of metadata used may be causing the error.
Thank you in advance! | https://github.com/langchain-ai/langchain/issues/7184 | https://github.com/langchain-ai/langchain/pull/8716 | a3c79b1909fe1cbe85394c353b0535117ef0cdf0 | 8bebc9206fb77ee22a9b0592c1efb32f27bb45db | "2023-07-05T09:49:42Z" | python | "2023-08-16T20:30:15Z" | libs/langchain/langchain/chains/qa_with_sources/base.py | """Question answering with sources over documents."""
from __future__ import annotations
import inspect
import re
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional
from langchain.pydantic_v1 import Extra, root_validator
from langchain.callbacks.manager import (
AsyncCallbackManagerForChainRun,
CallbackManagerForChainRun,
)
from langchain.chains import ReduceDocumentsChain
from langchain.chains.base import Chain
from langchain.chains.combine_documents.base import BaseCombineDocumentsChain
from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.llm import LLMChain
from langchain.chains.qa_with_sources.loading import load_qa_with_sources_chain
from langchain.chains.qa_with_sources.map_reduce_prompt import (
COMBINE_PROMPT,
EXAMPLE_PROMPT,
QUESTION_PROMPT,
)
from langchain.docstore.document import Document
from langchain.schema import BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel
class BaseQAWithSourcesChain(Chain, ABC): |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,184 | Issue: RetrievalQAWithSourcesChain gives error 'too many values to unpack (expected 2)' after running. | Hello,
I'm using _langchain_ for QA with court case documents. More specifically, the RetrievalQAWithSourcesChain to retrieve the answer and document source information. However, when running the chain with embedded documents, I get the following error:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
The passed documents are the sections from the court case. I added the following **metadata** fields:
1. Source: PDF file name.
2. Section: Name of the section
3. Section_chunk: Numerical value used for identification in case the section was divided into chunks.
4. Page: Page where the section chunk starts.
The documents are passed as retriever to the chain with FAISS (FAISS.from_documents(documents, self.embeddings)).
I tried out two approaches (both resulting in the same error):
1. providing the _load_qa_chain_ as chain
2. creating it using the class method **_.from_chain_type_**
My question is why this error occurs, and also whether the type of metadata used may be causing the error.
Thank you in advance! | https://github.com/langchain-ai/langchain/issues/7184 | https://github.com/langchain-ai/langchain/pull/8716 | a3c79b1909fe1cbe85394c353b0535117ef0cdf0 | 8bebc9206fb77ee22a9b0592c1efb32f27bb45db | "2023-07-05T09:49:42Z" | python | "2023-08-16T20:30:15Z" | libs/langchain/langchain/chains/qa_with_sources/base.py | """Question answering chain with sources over documents."""
combine_documents_chain: BaseCombineDocumentsChain
"""Chain to use to combine documents."""
question_key: str = "question"
input_docs_key: str = "docs"
answer_key: str = "answer"
sources_answer_key: str = "sources"
return_source_documents: bool = False
"""Return the source documents."""
@classmethod
def from_llm( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,184 | Issue: RetrievalQAWithSourcesChain gives error 'too many values to unpack (expected 2)' after running. | Hello,
I'm using _langchain_ for QA with court case documents. More specifically, the RetrievalQAWithSourcesChain to retrieve the answer and document source information. However, when running the chain with embedded documents, I get the following error:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
The passed documents are the sections from the court case. I added the following **metadata** fields:
1. Source: PDF file name.
2. Section: Name of the section
3. Section_chunk: Numerical value used for identification in case the section was divided into chunks.
4. Page: Page where the section chunk starts.
The documents are passed as retriever to the chain with FAISS (FAISS.from_documents(documents, self.embeddings)).
I tried out two approaches (both resulting in the same error):
1. providing the _load_qa_chain_ as chain
2. creating it using the class method **_.from_chain_type_**
My question is why this error occurs, and also whether the type of metadata used may be causing the error.
Thank you in advance! | https://github.com/langchain-ai/langchain/issues/7184 | https://github.com/langchain-ai/langchain/pull/8716 | a3c79b1909fe1cbe85394c353b0535117ef0cdf0 | 8bebc9206fb77ee22a9b0592c1efb32f27bb45db | "2023-07-05T09:49:42Z" | python | "2023-08-16T20:30:15Z" | libs/langchain/langchain/chains/qa_with_sources/base.py | cls,
llm: BaseLanguageModel,
document_prompt: BasePromptTemplate = EXAMPLE_PROMPT,
question_prompt: BasePromptTemplate = QUESTION_PROMPT,
combine_prompt: BasePromptTemplate = COMBINE_PROMPT,
**kwargs: Any,
) -> BaseQAWithSourcesChain:
"""Construct the chain from an LLM."""
llm_question_chain = LLMChain(llm=llm, prompt=question_prompt)
llm_combine_chain = LLMChain(llm=llm, prompt=combine_prompt)
combine_results_chain = StuffDocumentsChain(
llm_chain=llm_combine_chain,
document_prompt=document_prompt,
document_variable_name="summaries",
)
reduce_documents_chain = ReduceDocumentsChain(
combine_documents_chain=combine_results_chain
)
combine_documents_chain = MapReduceDocumentsChain(
llm_chain=llm_question_chain,
reduce_documents_chain=reduce_documents_chain,
document_variable_name="context",
)
return cls(
combine_documents_chain=combine_documents_chain,
**kwargs,
)
@classmethod
def from_chain_type( |
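For context on the report above, a minimal construction sketch of the retrieval variant the reporter describes (a FAISS retriever plus `from_chain_type`); the document contents, metadata values, and model choice are stand-ins, not taken from the issue:
```python
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
from langchain.vectorstores import FAISS

# Stand-ins for the court-case sections with the metadata fields listed in the issue.
documents = [
    Document(
        page_content="Section text ...",
        metadata={"source": "case.pdf", "section": "Ruling", "section_chunk": 0, "page": 12},
    )
]

vectorstore = FAISS.from_documents(documents, OpenAIEmbeddings())
qa = RetrievalQAWithSourcesChain.from_chain_type(
    llm=ChatOpenAI(temperature=0),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
)
response = qa({"question": "What did the court decide?"}, return_only_outputs=True)
print(response["answer"], response["sources"])
```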
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,184 | Issue: RetrievalQAWithSourcesChain gives error 'too many values to unpack (expected 2)' after running. | Hello,
I'm using _langchain_ for QA with court case documents. More specifically, the RetrievalQAWithSourcesChain to retrieve the answer and document source information. However, when running the chain with embedded documents, I get the following error:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
The passed documents are the sections from the court case. I added the following **metadata** fields:
1. Source: PDF file name.
2. Section: Name of the section
3. Section_chunk: Numerical value used for identification in case the section was divided into chunks.
4. Page: Page where the section chunk starts.
The documents are passed as retriever to the chain with FAISS (FAISS.from_documents(documents, self.embeddings)).
I tried out two approaches (both resulting in the same error):
1. providing the _load_qa_chain_ as chain
2. creating it using the class method **_.from_chain_type_**
My question is why this error occurs, and also whether the type of metadata used may be causing the error.
Thank you in advance! | https://github.com/langchain-ai/langchain/issues/7184 | https://github.com/langchain-ai/langchain/pull/8716 | a3c79b1909fe1cbe85394c353b0535117ef0cdf0 | 8bebc9206fb77ee22a9b0592c1efb32f27bb45db | "2023-07-05T09:49:42Z" | python | "2023-08-16T20:30:15Z" | libs/langchain/langchain/chains/qa_with_sources/base.py | cls,
llm: BaseLanguageModel,
chain_type: str = "stuff",
chain_type_kwargs: Optional[dict] = None,
**kwargs: Any,
) -> BaseQAWithSourcesChain:
"""Load chain from chain type."""
_chain_kwargs = chain_type_kwargs or {}
combine_documents_chain = load_qa_with_sources_chain(
llm, chain_type=chain_type, **_chain_kwargs
)
return cls(combine_documents_chain=combine_documents_chain, **kwargs)
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def input_keys(self) -> List[str]: |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,184 | Issue: RetrievalQAWithSourcesChain gives error 'too many values to unpack (expected 2)' after running. | Hello,
I'm using _langchain_ for QA with court case documents. More specifically, the RetrievalQAWithSourcesChain to retrieve the answer and document source information. However, when running the chain with embedded documents, I get the following error:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
The passed documents are the sections from the court case. I added the following **metadata** fields:
1. Source: PDF file name.
2. Section: Name of the section
3. Section_chunk: Numerical value used for identification in case the section was divided into chunks.
4. Page: Page where the section chunk starts.
The documents are passed as retriever to the chain with FAISS (FAISS.from_documents(documents, self.embeddings)).
I tried out two approaches (both resulting in the same error):
1. providing the _load_qa_chain_ as chain
2. creating it using the class method **_.from_chain_type_**
My question is why this error occurs, and also whether the type of metadata used may be causing the error.
Thank you in advance! | https://github.com/langchain-ai/langchain/issues/7184 | https://github.com/langchain-ai/langchain/pull/8716 | a3c79b1909fe1cbe85394c353b0535117ef0cdf0 | 8bebc9206fb77ee22a9b0592c1efb32f27bb45db | "2023-07-05T09:49:42Z" | python | "2023-08-16T20:30:15Z" | libs/langchain/langchain/chains/qa_with_sources/base.py | """Expect input key.
:meta private:
"""
return [self.question_key]
@property
def output_keys(self) -> List[str]:
"""Return output key.
:meta private:
"""
_output_keys = [self.answer_key, self.sources_answer_key]
if self.return_source_documents:
_output_keys = _output_keys + ["source_documents"]
return _output_keys
@root_validator(pre=True)
def validate_naming(cls, values: Dict) -> Dict:
"""Fix backwards compatibility in naming."""
if "combine_document_chain" in values:
values["combine_documents_chain"] = values.pop("combine_document_chain")
return values
@abstractmethod
def _get_docs(
self,
inputs: Dict[str, Any],
*,
run_manager: CallbackManagerForChainRun,
) -> List[Document]:
"""Get docs to run questioning over."""
def _call( |
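For context, here is a minimal construction matching the setup described in the report — a FAISS index over documents with custom metadata, used as the retriever for a sources chain. All file names, metadata values, and the question are placeholders, and it assumes `faiss-cpu` is installed and an OpenAI API key is configured.

```python
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

documents = [
    Document(
        page_content="Section text ...",
        metadata={"source": "case.pdf", "section": "Ruling", "section_chunk": 0, "page": 12},
    ),
]
retriever = FAISS.from_documents(documents, OpenAIEmbeddings()).as_retriever()

qa = RetrievalQAWithSourcesChain.from_chain_type(
    ChatOpenAI(temperature=0), chain_type="stuff", retriever=retriever
)
result = qa({"question": "What was the ruling?"}, return_only_outputs=True)
```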
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,184 | Issue: RetrievalQAWithSourcesChain gives error 'too many values to unpack (expected 2)' after running. | Hello,
I'm using _langchain_ for QA with court case documents. More specifically, the RetrievalQAWithSourcesChain to retrieve the answer and document source information. However, when running the chain with embedded documents, I get the following error:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
The passed documents are the sections from the court case. I added the following **metadata** fields:
1. Source: PDF file name.
2. Section: Name of the section
3. Section_chunk: Numeral value used for identification in case the section was divided into chunks.
4. Page: Page where the section chunk starts.
The documents are passed as a retriever to the chain with FAISS (FAISS.from_documents(documents, self.embeddings)).
I tried out two approaches (both resulting in the same error):
1. providing the _load_qa_chain_ as chain
2. creating it using the class method **_.from_chain_type_**
My question is why this error occurs, and also whether the type of metadata used may cause the errors.
Thank you in advance! | https://github.com/langchain-ai/langchain/issues/7184 | https://github.com/langchain-ai/langchain/pull/8716 | a3c79b1909fe1cbe85394c353b0535117ef0cdf0 | 8bebc9206fb77ee22a9b0592c1efb32f27bb45db | "2023-07-05T09:49:42Z" | python | "2023-08-16T20:30:15Z" | libs/langchain/langchain/chains/qa_with_sources/base.py | self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
accepts_run_manager = (
"run_manager" in inspect.signature(self._get_docs).parameters
)
if accepts_run_manager:
docs = self._get_docs(inputs, run_manager=_run_manager)
else:
docs = self._get_docs(inputs)
answer = self.combine_documents_chain.run(
input_documents=docs, callbacks=_run_manager.get_child(), **inputs
)
if re.search(r"SOURCES:\s", answer):
answer, sources = re.split(r"SOURCES:\s", answer)
else:
sources = ""
result: Dict[str, Any] = {
self.answer_key: answer,
self.sources_answer_key: sources,
}
if self.return_source_documents:
result["source_documents"] = docs
return result
@abstractmethod
async def _aget_docs( |
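Given the `_call` implementation above, the chain's output is a dict keyed by `answer` and `sources`, with `source_documents` added only when `return_source_documents=True`. A short usage sketch, assuming `qa` is any `BaseQAWithSourcesChain` instance such as the retrieval chain sketched earlier:

```python
result = qa({"question": "Who won the case?"}, return_only_outputs=True)

print(result["answer"])   # text before the "SOURCES:" marker
print(result["sources"])  # text after it, or "" if the marker is absent
# Present only when the chain was created with return_source_documents=True:
print(result.get("source_documents", "not requested"))
```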
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,184 | Issue: RetrievalQAWithSourcesChain gives error 'too many values to unpack (expected 2)' after running. | Hello,
I'm using _langchain_ for QA with court case documents. More specifically, the RetrievalQAWithSourcesChain to retrieve the answer and document source information. However, when running the chain with embedded documents, I get the following error:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
The passed documents are the sections from the court case. I added the following **metadata** fields:
1. Source: PDF file name.
2. Section: Name of the section
3. Section_chunk: Numeral value used for identification in case the section was divided into chunks.
4. Page: Page where the section chunk starts.
The documents are passed as a retriever to the chain with FAISS (FAISS.from_documents(documents, self.embeddings)).
I tried out two approaches (both resulting in the same error):
1. providing the _load_qa_chain_ as chain
2. creating it using the class method **_.from_chain_type_**
My question is why this error occurs, and also whether the type of metadata used may cause the errors.
Thank you in advance! | https://github.com/langchain-ai/langchain/issues/7184 | https://github.com/langchain-ai/langchain/pull/8716 | a3c79b1909fe1cbe85394c353b0535117ef0cdf0 | 8bebc9206fb77ee22a9b0592c1efb32f27bb45db | "2023-07-05T09:49:42Z" | python | "2023-08-16T20:30:15Z" | libs/langchain/langchain/chains/qa_with_sources/base.py | self,
inputs: Dict[str, Any],
*,
run_manager: AsyncCallbackManagerForChainRun,
) -> List[Document]:
"""Get docs to run questioning over."""
async def _acall( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,184 | Issue: RetrievalQAWithSourcesChain gives error 'too many values to unpack (expected 2)' after running. | Hello,
I'm using _langchain_ for QA with court case documents. More specifically, the RetrievalQAWithSourcesChain to retrieve the answer and document source information. However, when running the chain with embedded documents, I get the following error:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
The passed documents are the sections from the court case. I added the following **metadata** fields:
1. Source: PDF file name.
2. Section: Name of the section
3. Section_chunk: Numeral value used for identification in case the section was divided into chunks.
4. Page: Page where the section chunk starts.
The documents are passed as a retriever to the chain with FAISS (FAISS.from_documents(documents, self.embeddings)).
I tried out two approaches (both resulting in the same error):
1. providing the _load_qa_chain_ as chain
2. creating it using the class method **_.from_chain_type_**
My question is why this error occurs, and also whether the type of metadata used may cause the errors.
Thank you in advance! | https://github.com/langchain-ai/langchain/issues/7184 | https://github.com/langchain-ai/langchain/pull/8716 | a3c79b1909fe1cbe85394c353b0535117ef0cdf0 | 8bebc9206fb77ee22a9b0592c1efb32f27bb45db | "2023-07-05T09:49:42Z" | python | "2023-08-16T20:30:15Z" | libs/langchain/langchain/chains/qa_with_sources/base.py | self,
inputs: Dict[str, Any],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
_run_manager = run_manager or AsyncCallbackManagerForChainRun.get_noop_manager()
accepts_run_manager = (
"run_manager" in inspect.signature(self._aget_docs).parameters
)
if accepts_run_manager:
docs = await self._aget_docs(inputs, run_manager=_run_manager)
else:
docs = await self._aget_docs(inputs)
answer = await self.combine_documents_chain.arun(
input_documents=docs, callbacks=_run_manager.get_child(), **inputs
)
if re.search(r"SOURCES:\s", answer):
answer, sources = re.split(r"SOURCES:\s", answer)
else:
sources = ""
result: Dict[str, Any] = {
self.answer_key: answer,
self.sources_answer_key: sources,
}
if self.return_source_documents:
result["source_documents"] = docs
return result
class QAWithSourcesChain(BaseQAWithSourcesChain): |
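The `_acall` path above mirrors `_call` for async callers and is driven by `Chain.acall`. A sketch, reusing a retriever built as in the earlier FAISS example (the question and model settings are placeholders):

```python
import asyncio

from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import ChatOpenAI

async def ask(retriever, question: str) -> dict:
    qa = RetrievalQAWithSourcesChain.from_chain_type(
        ChatOpenAI(temperature=0), chain_type="stuff", retriever=retriever
    )
    # acall routes through _acall, which awaits _aget_docs and
    # combine_documents_chain.arun as shown above.
    return await qa.acall({"question": question}, return_only_outputs=True)

# asyncio.run(ask(retriever, "What was the ruling?"))
```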
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,184 | Issue: RetrievalQAWithSourcesChain gives error 'too many values to unpack (expected 2)' after running. | Hello,
I'm using _langchain_ for QA with court case documents. More specifically, the RetrievalQAWithSourcesChain to retrieve the answer and document source information. However, when running the chain with embedded documents, I get the following error:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
The passed documents are the sections from the court case. I added the following **metadata** fields:
1. Source: PDF file name.
2. Section: Name of the section
3. Section_chunk: Numeral value used for identification in case the section was divided into chunks.
4. Page: Page where the section chunk starts.
The documents are passed as a retriever to the chain with FAISS (FAISS.from_documents(documents, self.embeddings)).
I tried out two approaches (both resulting in the same error):
1. providing the _load_qa_chain_ as chain
2. creating it using the class method **_.from_chain_type_**
My question is why this error occurs, and also whether the type of metadata used may cause the errors.
Thank you in advance! | https://github.com/langchain-ai/langchain/issues/7184 | https://github.com/langchain-ai/langchain/pull/8716 | a3c79b1909fe1cbe85394c353b0535117ef0cdf0 | 8bebc9206fb77ee22a9b0592c1efb32f27bb45db | "2023-07-05T09:49:42Z" | python | "2023-08-16T20:30:15Z" | libs/langchain/langchain/chains/qa_with_sources/base.py | """Question answering with sources over documents."""
input_docs_key: str = "docs"
@property
def input_keys(self) -> List[str]:
"""Expect input key.
:meta private:
"""
return [self.input_docs_key, self.question_key]
def _get_docs(
self,
inputs: Dict[str, Any],
*,
run_manager: CallbackManagerForChainRun,
) -> List[Document]:
"""Get docs to run questioning over."""
return inputs.pop(self.input_docs_key)
async def _aget_docs(
self,
inputs: Dict[str, Any],
*,
run_manager: AsyncCallbackManagerForChainRun,
) -> List[Document]:
"""Get docs to run questioning over."""
return inputs.pop(self.input_docs_key)
@property
def _chain_type(self) -> str:
return "qa_with_sources_chain" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,117 | Missing new lines or empty spaces in refine default prompt. | I'm not sure if it's a typo or not but the default prompt in [langchain](https://github.com/hwchase17/langchain/tree/master/langchain)/[langchain](https://github.com/hwchase17/langchain/tree/master/langchain)/[chains](https://github.com/hwchase17/langchain/tree/master/langchain/chains)/[summarize](https://github.com/hwchase17/langchain/tree/master/langchain/chains/summarize)/[refine_prompts.py](https://github.com/hwchase17/langchain/tree/master/langchain/chains/summarize/refine_prompts.py) seems to miss a empty string or a `\n `
```
REFINE_PROMPT_TMPL = (
"Your job is to produce a final summary\n"
"We have provided an existing summary up to a certain point: {existing_answer}\n"
"We have the opportunity to refine the existing summary"
"(only if needed) with some more context below.\n"
"------------\n"
"{text}\n"
"------------\n"
"Given the new context, refine the original summary"
"If the context isn't useful, return the original summary."
)
```
It will produce `refine the original summaryIf the context isn't useful` and `existing summary(only if needed)`
I could proabbly fix it with a PR ( if it's unintentionnal), but I prefer to let someone more competent to do it as i'm not used to create PR's in large projects like this. | https://github.com/langchain-ai/langchain/issues/3117 | https://github.com/langchain-ai/langchain/pull/9957 | 4b1532876710e08aa70cdd0d52b18084f85eaed3 | 29270e0378661fe3d5a77cbe95311f9d4b5d33e8 | "2023-04-18T22:32:58Z" | python | "2023-08-31T14:29:49Z" | libs/langchain/langchain/chains/question_answering/refine_prompts.py | from langchain.chains.prompt_selector import ConditionalPromptSelector, is_chat_model
from langchain.prompts.chat import (
AIMessagePromptTemplate,
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
)
from langchain.prompts.prompt import PromptTemplate
DEFAULT_REFINE_PROMPT_TMPL = (
"The original question is as follows: {question}\n"
"We have provided an existing answer: {existing_answer}\n"
"We have the opportunity to refine the existing answer"
"(only if needed) with some more context below.\n" |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,117 | Missing new lines or empty spaces in refine default prompt. | I'm not sure if it's a typo or not but the default prompt in [langchain](https://github.com/hwchase17/langchain/tree/master/langchain)/[langchain](https://github.com/hwchase17/langchain/tree/master/langchain)/[chains](https://github.com/hwchase17/langchain/tree/master/langchain/chains)/[summarize](https://github.com/hwchase17/langchain/tree/master/langchain/chains/summarize)/[refine_prompts.py](https://github.com/hwchase17/langchain/tree/master/langchain/chains/summarize/refine_prompts.py) seems to miss a empty string or a `\n `
```
REFINE_PROMPT_TMPL = (
"Your job is to produce a final summary\n"
"We have provided an existing summary up to a certain point: {existing_answer}\n"
"We have the opportunity to refine the existing summary"
"(only if needed) with some more context below.\n"
"------------\n"
"{text}\n"
"------------\n"
"Given the new context, refine the original summary"
"If the context isn't useful, return the original summary."
)
```
It will produce `refine the original summaryIf the context isn't useful` and `existing summary(only if needed)`
I could proabbly fix it with a PR ( if it's unintentionnal), but I prefer to let someone more competent to do it as i'm not used to create PR's in large projects like this. | https://github.com/langchain-ai/langchain/issues/3117 | https://github.com/langchain-ai/langchain/pull/9957 | 4b1532876710e08aa70cdd0d52b18084f85eaed3 | 29270e0378661fe3d5a77cbe95311f9d4b5d33e8 | "2023-04-18T22:32:58Z" | python | "2023-08-31T14:29:49Z" | libs/langchain/langchain/chains/question_answering/refine_prompts.py | "------------\n"
"{context_str}\n"
"------------\n"
"Given the new context, refine the original answer to better "
"answer the question. "
"If the context isn't useful, return the original answer."
)
DEFAULT_REFINE_PROMPT = PromptTemplate(
input_variables=["question", "existing_answer", "context_str"],
template=DEFAULT_REFINE_PROMPT_TMPL,
)
refine_template = (
"We have the opportunity to refine the existing answer"
"(only if needed) with some more context below.\n"
"------------\n"
"{context_str}\n"
"------------\n"
"Given the new context, refine the original answer to better "
"answer the question. "
"If the context isn't useful, return the original answer."
)
messages = [
HumanMessagePromptTemplate.from_template("{question}"),
AIMessagePromptTemplate.from_template("{existing_answer}"),
HumanMessagePromptTemplate.from_template(refine_template),
]
CHAT_REFINE_PROMPT = ChatPromptTemplate.from_messages(messages)
REFINE_PROMPT_SELECTOR = ConditionalPromptSelector(
default_prompt=DEFAULT_REFINE_PROMPT,
conditionals=[(is_chat_model, CHAT_REFINE_PROMPT)], |
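For reference, the selector defined above is normally consumed through `get_prompt`, which returns the chat-style prompt for chat models and the plain template otherwise. A sketch — it assumes `OPENAI_API_KEY` is configured, since these model constructors validate it:

```python
from langchain.chains.question_answering.refine_prompts import REFINE_PROMPT_SELECTOR
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

chat_refine = REFINE_PROMPT_SELECTOR.get_prompt(ChatOpenAI())  # -> CHAT_REFINE_PROMPT
plain_refine = REFINE_PROMPT_SELECTOR.get_prompt(OpenAI())     # -> DEFAULT_REFINE_PROMPT
```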
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,117 | Missing new lines or empty spaces in refine default prompt. | I'm not sure if it's a typo or not but the default prompt in [langchain](https://github.com/hwchase17/langchain/tree/master/langchain)/[langchain](https://github.com/hwchase17/langchain/tree/master/langchain)/[chains](https://github.com/hwchase17/langchain/tree/master/langchain/chains)/[summarize](https://github.com/hwchase17/langchain/tree/master/langchain/chains/summarize)/[refine_prompts.py](https://github.com/hwchase17/langchain/tree/master/langchain/chains/summarize/refine_prompts.py) seems to miss a empty string or a `\n `
```
REFINE_PROMPT_TMPL = (
"Your job is to produce a final summary\n"
"We have provided an existing summary up to a certain point: {existing_answer}\n"
"We have the opportunity to refine the existing summary"
"(only if needed) with some more context below.\n"
"------------\n"
"{text}\n"
"------------\n"
"Given the new context, refine the original summary"
"If the context isn't useful, return the original summary."
)
```
It will produce `refine the original summaryIf the context isn't useful` and `existing summary(only if needed)`
I could proabbly fix it with a PR ( if it's unintentionnal), but I prefer to let someone more competent to do it as i'm not used to create PR's in large projects like this. | https://github.com/langchain-ai/langchain/issues/3117 | https://github.com/langchain-ai/langchain/pull/9957 | 4b1532876710e08aa70cdd0d52b18084f85eaed3 | 29270e0378661fe3d5a77cbe95311f9d4b5d33e8 | "2023-04-18T22:32:58Z" | python | "2023-08-31T14:29:49Z" | libs/langchain/langchain/chains/question_answering/refine_prompts.py | )
DEFAULT_TEXT_QA_PROMPT_TMPL = (
"Context information is below. \n"
"---------------------\n"
"{context_str}"
"\n---------------------\n"
"Given the context information and not prior knowledge, "
"answer the question: {question}\n"
)
DEFAULT_TEXT_QA_PROMPT = PromptTemplate(
input_variables=["context_str", "question"], template=DEFAULT_TEXT_QA_PROMPT_TMPL
)
chat_qa_prompt_template = (
"Context information is below. \n"
"---------------------\n"
"{context_str}"
"\n---------------------\n"
"Given the context information and not prior knowledge, "
"answer any questions"
)
messages = [
SystemMessagePromptTemplate.from_template(chat_qa_prompt_template),
HumanMessagePromptTemplate.from_template("{question}"),
]
CHAT_QUESTION_PROMPT = ChatPromptTemplate.from_messages(messages)
QUESTION_PROMPT_SELECTOR = ConditionalPromptSelector(
default_prompt=DEFAULT_TEXT_QA_PROMPT,
conditionals=[(is_chat_model, CHAT_QUESTION_PROMPT)],
) |
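A quick check of what `DEFAULT_TEXT_QA_PROMPT` above renders to; the context and question passed here are placeholders:

```python
from langchain.chains.question_answering.refine_prompts import DEFAULT_TEXT_QA_PROMPT

print(
    DEFAULT_TEXT_QA_PROMPT.format(
        context_str="The hearing took place on 3 May 2022.",
        question="When did the hearing take place?",
    )
)
```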
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 3,117 | Missing new lines or empty spaces in refine default prompt. | I'm not sure if it's a typo or not but the default prompt in [langchain](https://github.com/hwchase17/langchain/tree/master/langchain)/[langchain](https://github.com/hwchase17/langchain/tree/master/langchain)/[chains](https://github.com/hwchase17/langchain/tree/master/langchain/chains)/[summarize](https://github.com/hwchase17/langchain/tree/master/langchain/chains/summarize)/[refine_prompts.py](https://github.com/hwchase17/langchain/tree/master/langchain/chains/summarize/refine_prompts.py) seems to miss a empty string or a `\n `
```
REFINE_PROMPT_TMPL = (
"Your job is to produce a final summary\n"
"We have provided an existing summary up to a certain point: {existing_answer}\n"
"We have the opportunity to refine the existing summary"
"(only if needed) with some more context below.\n"
"------------\n"
"{text}\n"
"------------\n"
"Given the new context, refine the original summary"
"If the context isn't useful, return the original summary."
)
```
It will produce `refine the original summaryIf the context isn't useful` and `existing summary(only if needed)`
I could proabbly fix it with a PR ( if it's unintentionnal), but I prefer to let someone more competent to do it as i'm not used to create PR's in large projects like this. | https://github.com/langchain-ai/langchain/issues/3117 | https://github.com/langchain-ai/langchain/pull/9957 | 4b1532876710e08aa70cdd0d52b18084f85eaed3 | 29270e0378661fe3d5a77cbe95311f9d4b5d33e8 | "2023-04-18T22:32:58Z" | python | "2023-08-31T14:29:49Z" | libs/langchain/langchain/chains/summarize/refine_prompts.py | from langchain.prompts import PromptTemplate
REFINE_PROMPT_TMPL = (
"Your job is to produce a final summary\n"
"We have provided an existing summary up to a certain point: {existing_answer}\n"
"We have the opportunity to refine the existing summary"
"(only if needed) with some more context below.\n"
"------------\n"
"{text}\n"
"------------\n"
"Given the new context, refine the original summary\n"
"If the context isn't useful, return the original summary."
)
REFINE_PROMPT = PromptTemplate(
input_variables=["existing_answer", "text"],
template=REFINE_PROMPT_TMPL,
)
prompt_template = """Write a concise summary of the following:
"{text}"
CONCISE SUMMARY:"""
PROMPT = PromptTemplate(template=prompt_template, input_variables=["text"]) |
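These summarize templates are what `load_summarize_chain` picks up by default for the refine chain; before the wording above was corrected, one workaround was to pass a fixed prompt explicitly. A hedged sketch — the `refine_prompt` keyword name is my assumption about the loader's signature, and the template text here is illustrative:

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

custom_refine = PromptTemplate(
    input_variables=["existing_answer", "text"],
    template=(
        "We have an existing summary: {existing_answer}\n"
        "Refine it with the new context below if useful; otherwise return it unchanged.\n"
        "------------\n{text}\n------------"
    ),
)
chain = load_summarize_chain(
    OpenAI(temperature=0), chain_type="refine", refine_prompt=custom_refine
)
# summary = chain.run(docs)  # docs: a list of Document objects
```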
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,307 | ImportError: cannot import name 'ApifyWrapper' from 'langchain.utilities' | ### System Info
Hi All,
I tried to run the Apify tutorial and ran into the issue of ImportError: cannot import name 'ApifyWrapper' from 'langchain.utilities'. I checked the Utilities library under utilities/__init__.py and couldn't find anything under the generic integrations with third-party systems and packages.
Any thoughts or support?
### Who can help?
@hwchase17, @agola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import os
openai.api_key = os.environ["OPEN_API_KEY"]
os.environ["APIFY_API_TOKEN"] = "apify_api_qNa00bcYGUYFwIZltWiOuhskmer7E61VE6GN"
apify = ApifyWrapper()
loader = apify.call_actor(
actor_id="apify/website-content-crawler",
run_input={"startUrls": [{"url": "https://python.langchain.com/en/latest/"}]},
dataset_mapping_function=lambda item: Document(
page_content=item["text"] or "", metadata={"source": item["url"]}
),
)
index = VectorstoreIndexCreator().from_loaders([loader])
query = "What is LangChain?"
result = index.query_with_sources(query)
print(result["answer"])
print(result["sources"])
### Expected behavior
LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). It provides modules that can be used to build language model applications, and it also provides chains and agents with memory capabilities.
https://python.langchain.com/en/latest/modules/models/llms.html, https://python.langchain.com/en/latest/getting_started/getting_started.html | https://github.com/langchain-ai/langchain/issues/8307 | https://github.com/langchain-ai/langchain/pull/10067 | 02e51f4217207eed4fc9ac89735cf1f660be3f10 | 86646ec555970e01130994dc75f3a0c5d4e52de9 | "2023-07-26T18:18:22Z" | python | "2023-08-31T22:47:44Z" | libs/langchain/langchain/utilities/__init__.py | """**Utilities** are the integrations with third-part systems and packages.
Other LangChain classes use **Utilities** to interact with third-part systems
and packages.
"""
from langchain.utilities.alpha_vantage import AlphaVantageAPIWrapper
from langchain.utilities.arxiv import ArxivAPIWrapper
from langchain.utilities.awslambda import LambdaWrapper
from langchain.utilities.bash import BashProcess
from langchain.utilities.bibtex import BibtexparserWrapper
from langchain.utilities.bing_search import BingSearchAPIWrapper
from langchain.utilities.brave_search import BraveSearchWrapper
from langchain.utilities.duckduckgo_search import DuckDuckGoSearchAPIWrapper
from langchain.utilities.golden_query import GoldenQueryAPIWrapper
from langchain.utilities.google_places_api import GooglePlacesAPIWrapper |
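The reproduction script quoted in the report omits its imports and embeds a raw API token. Below is a completed, hedged version: the direct `langchain.utilities.apify` module path is my assumption about where the wrapper lives, the token is replaced with a placeholder, and it assumes `apify-client` is installed and an OpenAI key is available for the index creator.

```python
import os

from langchain.docstore.document import Document
from langchain.indexes import VectorstoreIndexCreator
from langchain.utilities.apify import ApifyWrapper  # direct module path (assumed)

os.environ["APIFY_API_TOKEN"] = "<your-apify-token>"
os.environ["OPENAI_API_KEY"] = "<your-openai-key>"

apify = ApifyWrapper()
loader = apify.call_actor(
    actor_id="apify/website-content-crawler",
    run_input={"startUrls": [{"url": "https://python.langchain.com/en/latest/"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item["text"] or "", metadata={"source": item["url"]}
    ),
)

index = VectorstoreIndexCreator().from_loaders([loader])
result = index.query_with_sources("What is LangChain?")
print(result["answer"], result["sources"])
```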
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 8,307 | ImportError: cannot import name 'ApifyWrapper' from 'langchain.utilities' | ### System Info
Hi All,
I tried to run the Apify tutorial and ran into the issue of ImportError: cannot import name 'ApifyWrapper' from 'langchain.utilities'. I checked the Utilities library under utilities/__init__.py and couldn't find anything under the generic integrations with third-party systems and packages.
Any thoughts or support?
### Who can help?
@hwchase17, @agola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import os
openai.api_key = os.environ["OPEN_API_KEY"]
os.environ["APIFY_API_TOKEN"] = "apify_api_qNa00bcYGUYFwIZltWiOuhskmer7E61VE6GN"
apify = ApifyWrapper()
loader = apify.call_actor(
actor_id="apify/website-content-crawler",
run_input={"startUrls": [{"url": "https://python.langchain.com/en/latest/"}]},
dataset_mapping_function=lambda item: Document(
page_content=item["text"] or "", metadata={"source": item["url"]}
),
)
index = VectorstoreIndexCreator().from_loaders([loader])
query = "What is LangChain?"
result = index.query_with_sources(query)
print(result["answer"])
print(result["sources"])
### Expected behavior
LangChain is a standard interface through which you can interact with a variety of large language models (LLMs). It provides modules that can be used to build language model applications, and it also provides chains and agents with memory capabilities.
https://python.langchain.com/en/latest/modules/models/llms.html, https://python.langchain.com/en/latest/getting_started/getting_started.html | https://github.com/langchain-ai/langchain/issues/8307 | https://github.com/langchain-ai/langchain/pull/10067 | 02e51f4217207eed4fc9ac89735cf1f660be3f10 | 86646ec555970e01130994dc75f3a0c5d4e52de9 | "2023-07-26T18:18:22Z" | python | "2023-08-31T22:47:44Z" | libs/langchain/langchain/utilities/__init__.py | from langchain.utilities.google_search import GoogleSearchAPIWrapper
from langchain.utilities.google_serper import GoogleSerperAPIWrapper
from langchain.utilities.graphql import GraphQLAPIWrapper
from langchain.utilities.jira import JiraAPIWrapper
from langchain.utilities.max_compute import MaxComputeAPIWrapper
from langchain.utilities.metaphor_search import MetaphorSearchAPIWrapper
from langchain.utilities.openweathermap import OpenWeatherMapAPIWrapper
from langchain.utilities.portkey import Portkey
from langchain.utilities.powerbi import PowerBIDataset
from langchain.utilities.pubmed import PubMedAPIWrapper
from langchain.utilities.python import PythonREPL
from langchain.utilities.requests import Requests, RequestsWrapper, TextRequestsWrapper
from langchain.utilities.scenexplain import SceneXplainAPIWrapper
from langchain.utilities.searx_search import SearxSearchWrapper
from langchain.utilities.serpapi import SerpAPIWrapper
from langchain.utilities.spark_sql import SparkSQL
from langchain.utilities.sql_database import SQLDatabase
from langchain.utilities.tensorflow_datasets import TensorflowDatasets
from langchain.utilities.twilio import TwilioAPIWrapper
from langchain.utilities.wikipedia import WikipediaAPIWrapper
from langchain.utilities.wolfram_alpha import WolframAlphaAPIWrapper
from langchain.utilities.zapier import ZapierNLAWrapper
__all__ = [
"AlphaVantageAPIWrapper",
"ArxivAPIWrapper",
"BashProcess",
"BibtexparserWrapper",
"BingSearchAPIWrapper",
"BraveSearchWrapper",
"DuckDuckGoSearchAPIWrapper", |