status | repo_name | repo_url | issue_id | title | body | issue_url | pull_url | before_fix_sha | after_fix_sha | report_datetime | language | commit_datetime | updated_file | chunk_content
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio

def idchain_sync(__input):
    print(f'sync chain call: {__input}')
    return __input

async def idchain_async(__input):
    print(f'async chain call: {__input}')
    return __input

idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async)

def func(__input):
    return idchain

asyncio.run(RunnableLambda(func).ainvoke('toto'))
# prints 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL routing can silently run chains synchronously even though the user called `ainvoke`.
When a RunnableLambda A returns a chain B and is called with `ainvoke`, we would expect the returned chain B to also be called with `ainvoke`.
However, if the function provided to RunnableLambda A is not async, chain B is called with `invoke` instead, silently causing the rest of the chain to run synchronously.
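A minimal sketch of a workaround, assuming the constructor and `_ainvoke` logic shown in `libs/core/langchain_core/runnables/base.py` below (an illustration only, not the fix adopted in the linked PR):

```
# Assumption: if the outer function is a coroutine, RunnableLambda stores it
# as `afunc`, so the returned chain B is awaited via `ainvoke`.
async def afunc_outer(__input):
    return idchain

asyncio.run(RunnableLambda(afunc_outer).ainvoke('toto'))
# prints 'async chain call: toto'
```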
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/langchain_core/runnables/base.py |
                ],
            ]
        ] = None,
    ) -> None:
        """Create a RunnableLambda from a callable, an async callable, or both.

        Accepts both sync and async variants to allow providing efficient
        implementations for sync and async execution.

        Args:
            func: Either sync or async callable
            afunc: An async callable that takes an input and returns an output.
        """
        if afunc is not None:
            self.afunc = afunc

        if inspect.iscoroutinefunction(func):
            if afunc is not None:
                raise TypeError(
                    "Func was provided as a coroutine function, but afunc was "
                    "also provided. If providing both, func should be a regular "
                    "function to avoid ambiguity."
                )
            self.afunc = func
        elif callable(func):
            self.func = cast(Callable[[Input], Output], func)
        else:
            raise TypeError(
                "Expected a callable type for `func`. "
                f"Instead got an unsupported type: {type(func)}"
            )

    @property
    def InputType(self) -> Any:
        """The type of the input to this runnable."""
        func = getattr(self, "func", None) or getattr(self, "afunc")
        try:
            params = inspect.signature(func).parameters
            first_param = next(iter(params.values()), None)
            if first_param and first_param.annotation != inspect.Parameter.empty:
                return first_param.annotation
            else:
                return Any
        except ValueError:
            return Any

    def get_input_schema(
        self, config: Optional[RunnableConfig] = None
    ) -> Type[BaseModel]:
        """The pydantic schema for the input to this runnable."""
        func = getattr(self, "func", None) or getattr(self, "afunc")
        if isinstance(func, itemgetter):
            items = str(func).replace("operator.itemgetter(", "")[:-1].split(", ")
            if all(
                item[0] == "'" and item[-1] == "'" and len(item) > 2 for item in items
            ):
                return create_model(
                    "RunnableLambdaInput",
                    **{item[1:-1]: (Any, None) for item in items},
                )
            else:
                return create_model("RunnableLambdaInput", __root__=(List[Any], None))
        if self.InputType != Any:
            return super().get_input_schema(config)
        if dict_keys := get_function_first_arg_dict_keys(func):
            return create_model(
                "RunnableLambdaInput",
                **{key: (Any, None) for key in dict_keys},
            )
        return super().get_input_schema(config)

    @property
    def OutputType(self) -> Any:
        """The type of the output of this runnable as a type annotation."""
        func = getattr(self, "func", None) or getattr(self, "afunc")
        try:
            sig = inspect.signature(func)
            return (
                sig.return_annotation
                if sig.return_annotation != inspect.Signature.empty
                else Any
            )
        except ValueError:
            return Any

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, RunnableLambda):
            if hasattr(self, "func") and hasattr(other, "func"):
                return self.func == other.func
            elif hasattr(self, "afunc") and hasattr(other, "afunc"):
                return self.afunc == other.afunc
            else:
                return False
        else:
            return False

    def __repr__(self) -> str:
        """A string representation of this runnable."""
        if hasattr(self, "func"):
            return f"RunnableLambda({get_lambda_source(self.func) or '...'})"
        elif hasattr(self, "afunc"):
            return f"RunnableLambda(afunc={get_lambda_source(self.afunc) or '...'})"
        else:
            return "RunnableLambda(...)"

    def _invoke(
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/langchain_core/runnables/base.py | self,
input: Input,
run_manager: CallbackManagerForChainRun,
config: RunnableConfig,
**kwargs: Any,
) -> Output:
output = call_func_with_variable_args(
self.func, input, config, run_manager, **kwargs
)
if isinstance(output, Runnable):
recursion_limit = config["recursion_limit"]
if recursion_limit <= 0:
raise RecursionError(
f"Recursion limit reached when invoking {self} with input {input}."
)
output = output.invoke(
input,
patch_config(
config,
callbacks=run_manager.get_child(),
recursion_limit=recursion_limit - 1,
),
)
return output
async def _ainvoke( |
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/langchain_core/runnables/base.py | self,
input: Input,
run_manager: AsyncCallbackManagerForChainRun,
config: RunnableConfig,
**kwargs: Any,
) -> Output:
output = await acall_func_with_variable_args(
self.afunc, input, config, run_manager, **kwargs
)
if isinstance(output, Runnable):
recursion_limit = config["recursion_limit"]
if recursion_limit <= 0:
raise RecursionError(
f"Recursion limit reached when invoking {self} with input {input}."
)
output = await output.ainvoke(
input,
patch_config(
config,
callbacks=run_manager.get_child(),
recursion_limit=recursion_limit - 1,
),
)
return output
def _config( |
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/langchain_core/runnables/base.py | self, config: Optional[RunnableConfig], callable: Callable[..., Any]
) -> RunnableConfig:
config = config or {}
if config.get("run_name") is None:
try:
run_name = callable.__name__
except AttributeError:
run_name = None
if run_name is not None:
return patch_config(config, run_name=run_name)
return config
def invoke( |
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/langchain_core/runnables/base.py | self,
input: Input,
config: Optional[RunnableConfig] = None,
**kwargs: Optional[Any],
) -> Output:
"""Invoke this runnable synchronously."""
if hasattr(self, "func"):
return self._call_with_config(
self._invoke,
input,
self._config(config, self.func),
**kwargs,
)
else:
raise TypeError(
"Cannot invoke a coroutine function synchronously."
"Use `ainvoke` instead."
)
async def ainvoke( |
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/langchain_core/runnables/base.py | self,
input: Input,
config: Optional[RunnableConfig] = None,
**kwargs: Optional[Any],
) -> Output:
"""Invoke this runnable asynchronously."""
if hasattr(self, "afunc"):
return await self._acall_with_config(
self._ainvoke,
input,
self._config(config, self.afunc),
**kwargs,
)
else:
return await super().ainvoke(input, config)
class RunnableEachBase(RunnableSerializable[List[Input], List[Output]]): |
    """
    A runnable that delegates calls to another runnable
    with each element of the input sequence.

    Use only if creating a new RunnableEach subclass with different __init__ args.
    """

    bound: Runnable[Input, Output]

    class Config:
        arbitrary_types_allowed = True

    @property
    def InputType(self) -> Any:
        return List[self.bound.InputType]

    def get_input_schema(
        self, config: Optional[RunnableConfig] = None
    ) -> Type[BaseModel]:
        return create_model(
            "RunnableEachInput",
            __root__=(
                List[self.bound.get_input_schema(config)],
                None,
            ),
        )

    @property
    def OutputType(self) -> Type[List[Output]]:
        return List[self.bound.OutputType]

    def get_output_schema(
        self, config: Optional[RunnableConfig] = None
    ) -> Type[BaseModel]:
        schema = self.bound.get_output_schema(config)
        return create_model(
            "RunnableEachOutput",
            __root__=(
                List[schema],
                None,
            ),
        )

    @property
    def config_specs(self) -> List[ConfigurableFieldSpec]:
        return self.bound.config_specs

    @classmethod
    def is_lc_serializable(cls) -> bool:
        return True

    @classmethod
    def get_lc_namespace(cls) -> List[str]:
        return cls.__module__.split(".")[:-1]

    def _invoke(
        self,
        inputs: List[Input],
        run_manager: CallbackManagerForChainRun,
        config: RunnableConfig,
        **kwargs: Any,
    ) -> List[Output]:
        return self.bound.batch(
            inputs, patch_config(config, callbacks=run_manager.get_child()), **kwargs
        )

    def invoke(
        self, input: List[Input], config: Optional[RunnableConfig] = None, **kwargs: Any
    ) -> List[Output]:
        return self._call_with_config(self._invoke, input, config, **kwargs)

    async def _ainvoke(
        self,
        inputs: List[Input],
        run_manager: AsyncCallbackManagerForChainRun,
        config: RunnableConfig,
        **kwargs: Any,
    ) -> List[Output]:
        return await self.bound.abatch(
            inputs, patch_config(config, callbacks=run_manager.get_child()), **kwargs
        )

    async def ainvoke(
        self, input: List[Input], config: Optional[RunnableConfig] = None, **kwargs: Any
    ) -> List[Output]:
        return await self._acall_with_config(self._ainvoke, input, config, **kwargs)


class RunnableEach(RunnableEachBase[Input, Output]):
    """
    A runnable that delegates calls to another runnable
    with each element of the input sequence.
    """

    def bind(self, **kwargs: Any) -> RunnableEach[Input, Output]:
        return RunnableEach(bound=self.bound.bind(**kwargs))

    def with_config(
        self, config: Optional[RunnableConfig] = None, **kwargs: Any
    ) -> RunnableEach[Input, Output]:
        return RunnableEach(bound=self.bound.with_config(config, **kwargs))

    def with_listeners(
        self,
        *,
        on_start: Optional[Listener] = None,
        on_end: Optional[Listener] = None,
        on_error: Optional[Listener] = None,
    ) -> RunnableEach[Input, Output]:
        """
        Bind lifecycle listeners to a Runnable, returning a new Runnable.

        on_start: Called before the runnable starts running, with the Run object.
        on_end: Called after the runnable finishes running, with the Run object.
        on_error: Called if the runnable throws an error, with the Run object.

        The Run object contains information about the run, including its id,
        type, input, output, error, start_time, end_time, and any tags or metadata
        added to the run.
        """
        return RunnableEach(
            bound=self.bound.with_listeners(
                on_start=on_start, on_end=on_end, on_error=on_error
            )
        )


class RunnableBindingBase(RunnableSerializable[Input, Output]):
    """A runnable that delegates calls to another runnable with a set of kwargs.

    Use only if creating a new RunnableBinding subclass with different __init__ args.

    See documentation for RunnableBinding for more details.
    """

    bound: Runnable[Input, Output]
    """The underlying runnable that this runnable delegates to."""

    kwargs: Mapping[str, Any] = Field(default_factory=dict)
    """kwargs to pass to the underlying runnable when running.

    For example, when the runnable binding is invoked the underlying
    runnable will be invoked with the same input but with these additional
    kwargs.
    """

    config: RunnableConfig = Field(default_factory=dict)
    """The config to bind to the underlying runnable."""

    config_factories: List[Callable[[RunnableConfig], RunnableConfig]] = Field(
        default_factory=list
    )
    """The config factories to bind to the underlying runnable."""

    custom_input_type: Optional[Any] = None
    """Override the input type of the underlying runnable with a custom type.

    The type can be a pydantic model, or a type annotation (e.g., `List[str]`).
    """

    custom_output_type: Optional[Any] = None
    """Override the output type of the underlying runnable with a custom type.

    The type can be a pydantic model, or a type annotation (e.g., `List[str]`).
    """

    class Config:
        arbitrary_types_allowed = True

    def __init__(
        self,
        *,
        bound: Runnable[Input, Output],
        kwargs: Optional[Mapping[str, Any]] = None,
        config: Optional[RunnableConfig] = None,
        config_factories: Optional[
            List[Callable[[RunnableConfig], RunnableConfig]]
        ] = None,
        custom_input_type: Optional[Union[Type[Input], BaseModel]] = None,
        custom_output_type: Optional[Union[Type[Output], BaseModel]] = None,
        **other_kwargs: Any,
    ) -> None:
        """Create a RunnableBinding from a runnable and kwargs.

        Args:
            bound: The underlying runnable that this runnable delegates calls to.
            kwargs: optional kwargs to pass to the underlying runnable, when running
                the underlying runnable (e.g., via `invoke`, `batch`,
                `transform`, or `stream` or async variants)
            config: optional config to bind to the underlying runnable.
            config_factories: optional list of config factories to apply to the
                config before binding to the underlying runnable.
            custom_input_type: Specify to override the input type of the underlying
                runnable with a custom type.
            custom_output_type: Specify to override the output type of the underlying
                runnable with a custom type.
            **other_kwargs: Unpacked into the base class.
        """
        config = config or {}
        if configurable := config.get("configurable", None):
            allowed_keys = set(s.id for s in bound.config_specs)
            for key in configurable:
                if key not in allowed_keys:
                    raise ValueError(
                        f"Configurable key '{key}' not found in runnable with"
                        f" config keys: {allowed_keys}"
                    )
        super().__init__(
            bound=bound,
            kwargs=kwargs or {},
            config=config or {},
            config_factories=config_factories or [],
            custom_input_type=custom_input_type,
            custom_output_type=custom_output_type,
            **other_kwargs,
        )

    @property
    def InputType(self) -> Type[Input]:
        return (
            cast(Type[Input], self.custom_input_type)
            if self.custom_input_type is not None
            else self.bound.InputType
        )

    @property
    def OutputType(self) -> Type[Output]:
        return (
            cast(Type[Output], self.custom_output_type)
            if self.custom_output_type is not None
            else self.bound.OutputType
        )

    def get_input_schema(
        self, config: Optional[RunnableConfig] = None
    ) -> Type[BaseModel]:
        if self.custom_input_type is not None:
            return super().get_input_schema(config)
        return self.bound.get_input_schema(merge_configs(self.config, config))

    def get_output_schema(
        self, config: Optional[RunnableConfig] = None
    ) -> Type[BaseModel]:
        if self.custom_output_type is not None:
            return super().get_output_schema(config)
        return self.bound.get_output_schema(merge_configs(self.config, config))

    @property
    def config_specs(self) -> List[ConfigurableFieldSpec]:
        return self.bound.config_specs

    @classmethod
    def is_lc_serializable(cls) -> bool:
        return True

    @classmethod
    def get_lc_namespace(cls) -> List[str]:
        return cls.__module__.split(".")[:-1]

    def _merge_configs(self, *configs: Optional[RunnableConfig]) -> RunnableConfig:
        config = merge_configs(self.config, *configs)
        return merge_configs(config, *(f(config) for f in self.config_factories))

    def invoke(
        self,
        input: Input,
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> Output:
        return self.bound.invoke(
            input,
            self._merge_configs(config),
            **{**self.kwargs, **kwargs},
        )

    async def ainvoke(
        self,
        input: Input,
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> Output:
        return await self.bound.ainvoke(
            input,
            self._merge_configs(config),
            **{**self.kwargs, **kwargs},
        )

    def batch(
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/langchain_core/runnables/base.py | self,
inputs: List[Input],
config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None,
*,
return_exceptions: bool = False,
**kwargs: Optional[Any],
) -> List[Output]:
if isinstance(config, list):
configs = cast(
List[RunnableConfig],
[self._merge_configs(conf) for conf in config],
)
else:
configs = [self._merge_configs(config) for _ in range(len(inputs))]
return self.bound.batch(
inputs,
configs,
return_exceptions=return_exceptions,
**{**self.kwargs, **kwargs},
)
async def abatch( |
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/langchain_core/runnables/base.py | self,
inputs: List[Input],
config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None,
*,
return_exceptions: bool = False,
**kwargs: Optional[Any],
) -> List[Output]:
if isinstance(config, list):
configs = cast(
List[RunnableConfig],
[self._merge_configs(conf) for conf in config],
)
else:
configs = [self._merge_configs(config) for _ in range(len(inputs))]
return await self.bound.abatch(
inputs,
configs,
return_exceptions=return_exceptions,
**{**self.kwargs, **kwargs},
)
def stream( |
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/langchain_core/runnables/base.py | self,
input: Input,
config: Optional[RunnableConfig] = None,
**kwargs: Optional[Any],
) -> Iterator[Output]:
yield from self.bound.stream(
input,
self._merge_configs(config),
**{**self.kwargs, **kwargs},
)
async def astream( |
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/langchain_core/runnables/base.py | self,
input: Input,
config: Optional[RunnableConfig] = None,
**kwargs: Optional[Any],
) -> AsyncIterator[Output]:
async for item in self.bound.astream(
input,
self._merge_configs(config),
**{**self.kwargs, **kwargs},
):
yield item
def transform(
self,
input: Iterator[Input],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Iterator[Output]:
yield from self.bound.transform(
input,
self._merge_configs(config),
**{**self.kwargs, **kwargs},
)
async def atransform( |
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/langchain_core/runnables/base.py | self,
input: AsyncIterator[Input],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> AsyncIterator[Output]:
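        """Asynchronously transform an input stream through the wrapped Runnable."""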
async for item in self.bound.atransform(
input,
self._merge_configs(config),
**{**self.kwargs, **kwargs},
):
yield item
RunnableBindingBase.update_forward_refs(RunnableConfig=RunnableConfig)
class RunnableBinding(RunnableBindingBase[Input, Output]):
"""Wrap a runnable with additional functionality.
A RunnableBinding can be thought of as a "runnable decorator" that
preserves the essential features of Runnable; i.e., batching, streaming,
and async support, while adding additional functionality.
Any class that inherits from Runnable can be bound to a `RunnableBinding`.
Runnables expose a standard set of methods for creating `RunnableBindings`
    or sub-classes of `RunnableBindings` (e.g., `RunnableRetry`,
    `RunnableWithFallbacks`) that add additional functionality.
These methods include:
- `bind`: Bind kwargs to pass to the underlying runnable when running it.
- `with_config`: Bind config to pass to the underlying runnable when running it.
- `with_listeners`: Bind lifecycle listeners to the underlying runnable.
- `with_types`: Override the input and output types of the underlying runnable.
- `with_retry`: Bind a retry policy to the underlying runnable.
- `with_fallbacks`: Bind a fallback policy to the underlying runnable.
Example:
`bind`: Bind kwargs to pass to the underlying runnable when running it.
.. code-block:: python
# Create a runnable binding that invokes the ChatModel with the
# additional kwarg `stop=['-']` when running it.
from langchain.chat_models import ChatOpenAI
model = ChatOpenAI()
model.invoke('Say "Parrot-MAGIC"', stop=['-']) # Should return `Parrot`
# Using it the easy way via `bind` method which returns a new
# RunnableBinding
runnable_binding = model.bind(stop=['-'])
runnable_binding.invoke('Say "Parrot-MAGIC"') # Should return `Parrot`
Can also be done by instantiating a RunnableBinding directly (not recommended):
.. code-block:: python
from langchain.schema.runnable import RunnableBinding
runnable_binding = RunnableBinding(
bound=model,
kwargs={'stop': ['-']} # <-- Note the additional kwargs
)
runnable_binding.invoke('Say "Parrot-MAGIC"') # Should return `Parrot`
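
    `with_config`: Bind config to pass to the underlying runnable when running it.

    .. code-block:: python

        # Illustrative sketch: attach tags that will be merged into every run
        # of the bound model (reuses the `model` from the example above).
        tagged_model = model.with_config({"tags": ["my-tag"]})
        tagged_model.invoke('Say "Parrot-MAGIC"', stop=['-'])  # Should return `Parrot`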
"""
    def bind(self, **kwargs: Any) -> Runnable[Input, Output]:
        """Bind additional kwargs to a Runnable, returning a new Runnable.
Args:
**kwargs: The kwargs to bind to the Runnable.
Returns:
A new Runnable with the same type and config as the original,
but with the additional kwargs bound.
"""
return self.__class__(
bound=self.bound,
config=self.config,
kwargs={**self.kwargs, **kwargs},
custom_input_type=self.custom_input_type,
custom_output_type=self.custom_output_type,
)
    def with_config(
        self,
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Runnable[Input, Output]:
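        """Bind additional config to the wrapped Runnable, returning a new Runnable."""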
return self.__class__(
bound=self.bound,
kwargs=self.kwargs,
config=cast(RunnableConfig, {**self.config, **(config or {}), **kwargs}),
custom_input_type=self.custom_input_type,
custom_output_type=self.custom_output_type,
)
def with_listeners(
self,
*,
on_start: Optional[Listener] = None,
on_end: Optional[Listener] = None,
on_error: Optional[Listener] = None,
) -> Runnable[Input, Output]:
"""Bind lifecycle listeners to a Runnable, returning a new Runnable. |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/langchain_core/runnables/base.py | Args:
on_start: Called before the runnable starts running, with the Run object.
on_end: Called after the runnable finishes running, with the Run object.
on_error: Called if the runnable throws an error, with the Run object.
Returns:
The Run object contains information about the run, including its id,
type, input, output, error, start_time, end_time, and any tags or metadata
added to the run.
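
        Example:

            .. code-block:: python

                # Illustrative sketch (an assumption made for demonstration,
                # not part of this method's contract): print run ids as a
                # simple runnable starts and ends.
                from langchain_core.runnables import RunnableLambda

                chain = RunnableLambda(lambda x: x + 1).with_listeners(
                    on_start=lambda run: print("start:", run.id),
                    on_end=lambda run: print("end:", run.id),
                )
                chain.invoke(1)  # prints the start/end run ids, returns 2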
"""
from langchain_core.tracers.root_listeners import RootListenersTracer
return self.__class__(
bound=self.bound,
kwargs=self.kwargs,
config=self.config,
config_factories=[
lambda config: {
"callbacks": [
RootListenersTracer(
config=config,
on_start=on_start,
on_end=on_end,
on_error=on_error,
)
],
}
],
custom_input_type=self.custom_input_type,
custom_output_type=self.custom_output_type,
)
    def with_types(
        self,
input_type: Optional[Union[Type[Input], BaseModel]] = None,
output_type: Optional[Union[Type[Output], BaseModel]] = None,
) -> Runnable[Input, Output]:
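        """Override the input and/or output types of the wrapped Runnable."""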
return self.__class__(
bound=self.bound,
kwargs=self.kwargs,
config=self.config,
custom_input_type=input_type
if input_type is not None
else self.custom_input_type,
custom_output_type=output_type
if output_type is not None
else self.custom_output_type,
)
def with_retry(self, **kwargs: Any) -> Runnable[Input, Output]:
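        """Bind a retry policy to the wrapped Runnable, returning a new Runnable."""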
return self.__class__(
bound=self.bound.with_retry(**kwargs),
kwargs=self.kwargs,
config=self.config,
)
RunnableLike = Union[
Runnable[Input, Output],
Callable[[Input], Output],
Callable[[Input], Awaitable[Output]],
Callable[[Iterator[Input]], Iterator[Output]],
Callable[[AsyncIterator[Input]], AsyncIterator[Output]],
Mapping[str, Any],
]
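"""A type alias for anything that can be coerced into a Runnable by coerce_to_runnable."""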
def coerce_to_runnable(thing: RunnableLike) -> Runnable[Input, Output]:
    """Coerce a runnable-like object into a Runnable.
Args:
thing: A runnable-like object.
Returns:
A Runnable.
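
    Example:

        .. code-block:: python

            # Illustrative sketch of common coercions; importing from this
            # module directly is an assumption made for the example.
            from langchain_core.runnables.base import coerce_to_runnable

            coerce_to_runnable(lambda x: x + 1)  # -> RunnableLambda
            coerce_to_runnable({"double": lambda x: x * 2})  # -> RunnableParallel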
"""
if isinstance(thing, Runnable):
return thing
elif inspect.isasyncgenfunction(thing) or inspect.isgeneratorfunction(thing):
return RunnableGenerator(thing)
elif callable(thing):
return RunnableLambda(cast(Callable[[Input], Output], thing))
elif isinstance(thing, dict):
return cast(Runnable[Input, Output], RunnableParallel(thing))
else:
raise TypeError(
f"Expected a Runnable, callable or dict."
f"Instead got an unsupported type: {type(thing)}"
        )

# Source file: libs/core/tests/unit_tests/runnables/test_runnable.py

import sys
from functools import partial
from operator import itemgetter
from typing import (
Any,
AsyncIterator,
Dict,
Iterator,
List,
Optional,
Sequence,
Union,
cast,
)
from uuid import UUID
import pytest
from freezegun import freeze_time
from pytest_mock import MockerFixture
from syrupy import SnapshotAssertion
from typing_extensions import TypedDict
from langchain_core.callbacks.manager import (
Callbacks,
atrace_as_chain_group,
trace_as_chain_group,
)
from langchain_core.documents import Document
from langchain_core.load import dumpd, dumps
from langchain_core.messages import (
AIMessage,
AIMessageChunk,
HumanMessage,
SystemMessage,
)
from langchain_core.output_parsers import (
BaseOutputParser,
CommaSeparatedListOutputParser,
StrOutputParser,
)
from langchain_core.prompt_values import ChatPromptValue, StringPromptValue
from langchain_core.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
PromptTemplate,
SystemMessagePromptTemplate,
)
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.retrievers import BaseRetriever
from langchain_core.runnables import (
ConfigurableField,
ConfigurableFieldMultiOption,
ConfigurableFieldSingleOption,
RouterRunnable,
Runnable,
RunnableBinding,
RunnableBranch,
RunnableConfig,
RunnableGenerator,
RunnableLambda,
RunnableParallel,
RunnablePassthrough,
RunnableSequence,
RunnableWithFallbacks,
add,
)
from langchain_core.tools import BaseTool, tool
from langchain_core.tracers import (
BaseTracer,
ConsoleCallbackHandler,
Run,
RunLog,
RunLogPatch,
)
from langchain_core.tracers.context import collect_runs
from tests.unit_tests.fake.chat_model import FakeListChatModel
from tests.unit_tests.fake.llm import FakeListLLM, FakeStreamingListLLM
class FakeTracer(BaseTracer):
    """Fake tracer that records LangChain execution.
It replaces run ids with deterministic UUIDs for snapshotting."""
def __init__(self) -> None:
"""Initialize the tracer."""
super().__init__()
self.runs: List[Run] = []
self.uuids_map: Dict[UUID, UUID] = {}
self.uuids_generator = (
UUID(f"00000000-0000-4000-8000-{i:012}", version=4) for i in range(10000)
)
def _replace_uuid(self, uuid: UUID) -> UUID:
if uuid not in self.uuids_map:
self.uuids_map[uuid] = next(self.uuids_generator)
return self.uuids_map[uuid]
def _copy_run(self, run: Run) -> Run:
return run.copy(
update={
"id": self._replace_uuid(run.id),
"parent_run_id": self.uuids_map[run.parent_run_id]
if run.parent_run_id
else None,
"child_runs": [self._copy_run(child) for child in run.child_runs],
"execution_order": None,
"child_execution_order": None,
}
)
def _persist_run(self, run: Run) -> None:
"""Persist a run."""
self.runs.append(self._copy_run(run))
class FakeRunnable(Runnable[str, int]):
    def invoke(
self,
input: str,
config: Optional[RunnableConfig] = None,
) -> int:
return len(input)
class FakeRetriever(BaseRetriever):
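    """Fake retriever that returns the same two documents for every query."""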
def _get_relevant_documents(
self,
query: str,
*,
callbacks: Callbacks = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
return [Document(page_content="foo"), Document(page_content="bar")]
    async def _aget_relevant_documents(
        self,
query: str,
*,
callbacks: Callbacks = None,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> List[Document]:
return [Document(page_content="foo"), Document(page_content="bar")]
def test_schemas(snapshot: SnapshotAssertion) -> None:
fake = FakeRunnable()
assert fake.input_schema.schema() == {
"title": "FakeRunnableInput",
"type": "string",
}
assert fake.output_schema.schema() == {
"title": "FakeRunnableOutput",
"type": "integer",
}
assert fake.config_schema(include=["tags", "metadata", "run_name"]).schema() == {
"title": "FakeRunnableConfig", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | "type": "object",
"properties": {
"metadata": {"title": "Metadata", "type": "object"},
"run_name": {"title": "Run Name", "type": "string"},
"tags": {"items": {"type": "string"}, "title": "Tags", "type": "array"},
},
}
fake_bound = FakeRunnable().bind(a="b")
assert fake_bound.input_schema.schema() == {
"title": "FakeRunnableInput",
"type": "string",
}
assert fake_bound.output_schema.schema() == {
"title": "FakeRunnableOutput",
"type": "integer",
}
fake_w_fallbacks = FakeRunnable().with_fallbacks((fake,))
assert fake_w_fallbacks.input_schema.schema() == {
"title": "FakeRunnableInput",
"type": "string",
}
assert fake_w_fallbacks.output_schema.schema() == {
"title": "FakeRunnableOutput",
"type": "integer",
}
def typed_lambda_impl(x: str) -> int:
return len(x)
typed_lambda = RunnableLambda(typed_lambda_impl)
assert typed_lambda.input_schema.schema() == {
"title": "RunnableLambdaInput", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | "type": "string",
}
assert typed_lambda.output_schema.schema() == {
"title": "RunnableLambdaOutput",
"type": "integer",
}
async def typed_async_lambda_impl(x: str) -> int:
return len(x)
typed_async_lambda: Runnable = RunnableLambda(typed_async_lambda_impl)
assert typed_async_lambda.input_schema.schema() == {
"title": "RunnableLambdaInput",
"type": "string",
}
assert typed_async_lambda.output_schema.schema() == {
"title": "RunnableLambdaOutput",
"type": "integer",
}
fake_ret = FakeRetriever()
assert fake_ret.input_schema.schema() == {
"title": "FakeRetrieverInput",
"type": "string",
}
assert fake_ret.output_schema.schema() == {
"title": "FakeRetrieverOutput",
"type": "array",
"items": {"$ref": "#/definitions/Document"},
"definitions": {
"Document": {
"title": "Document",
"description": "Class for storing a piece of text and associated metadata.", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | "type": "object",
"properties": {
"page_content": {"title": "Page Content", "type": "string"},
"metadata": {"title": "Metadata", "type": "object"},
"type": {
"title": "Type",
"enum": ["Document"],
"default": "Document",
"type": "string",
},
},
"required": ["page_content"],
}
},
}
fake_llm = FakeListLLM(responses=["a"])
assert fake_llm.input_schema.schema() == snapshot
assert fake_llm.output_schema.schema() == {
"title": "FakeListLLMOutput",
"type": "string",
}
fake_chat = FakeListChatModel(responses=["a"])
assert fake_chat.input_schema.schema() == snapshot
assert fake_chat.output_schema.schema() == snapshot
chat_prompt = ChatPromptTemplate.from_messages(
[
MessagesPlaceholder(variable_name="history"),
("human", "Hello, how are you?"),
]
    )
    assert chat_prompt.input_schema.schema() == {
"title": "PromptInput",
"type": "object",
"properties": {
"history": {
"title": "History",
"type": "array",
"items": {
"anyOf": [
{"$ref": "#/definitions/AIMessage"},
{"$ref": "#/definitions/HumanMessage"},
{"$ref": "#/definitions/ChatMessage"},
{"$ref": "#/definitions/SystemMessage"},
{"$ref": "#/definitions/FunctionMessage"},
{"$ref": "#/definitions/ToolMessage"},
]
},
}
},
"definitions": {
"AIMessage": {
"title": "AIMessage",
"description": "A Message from an AI.",
"type": "object",
"properties": {
"content": {
"title": "Content",
"anyOf": [
{"type": "string"},
                        {
                            "type": "array",
"items": {
"anyOf": [{"type": "string"}, {"type": "object"}]
},
},
],
},
"additional_kwargs": {
"title": "Additional Kwargs",
"type": "object",
},
"type": {
"title": "Type",
"default": "ai",
"enum": ["ai"],
"type": "string",
},
"example": {
"title": "Example",
"default": False,
"type": "boolean",
},
},
"required": ["content"],
},
"HumanMessage": {
"title": "HumanMessage",
"description": "A Message from a human.",
"type": "object",
"properties": { |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | "content": {
"title": "Content",
"anyOf": [
{"type": "string"},
{
"type": "array",
"items": {
"anyOf": [{"type": "string"}, {"type": "object"}]
},
},
],
},
"additional_kwargs": {
"title": "Additional Kwargs",
"type": "object",
},
"type": {
"title": "Type",
"default": "human",
"enum": ["human"],
"type": "string",
},
"example": {
"title": "Example",
"default": False,
"type": "boolean",
},
},
"required": ["content"],
            },
            "ChatMessage": {
"title": "ChatMessage",
"description": "A Message that can be assigned an arbitrary speaker (i.e. role).",
"type": "object",
"properties": {
"content": {
"title": "Content",
"anyOf": [
{"type": "string"},
{
"type": "array",
"items": {
"anyOf": [{"type": "string"}, {"type": "object"}]
},
},
],
},
"additional_kwargs": {
"title": "Additional Kwargs",
"type": "object",
},
"type": {
"title": "Type",
"default": "chat",
"enum": ["chat"],
"type": "string",
},
"role": {"title": "Role", "type": "string"},
},
"required": ["content", "role"], |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | },
"SystemMessage": {
"title": "SystemMessage",
"description": "A Message for priming AI behavior, usually passed in as the first of a sequence\nof input messages.",
"type": "object",
"properties": {
"content": {
"title": "Content",
"anyOf": [
{"type": "string"},
{
"type": "array",
"items": {
"anyOf": [{"type": "string"}, {"type": "object"}]
},
},
],
},
"additional_kwargs": {
"title": "Additional Kwargs",
"type": "object",
},
"type": {
"title": "Type",
"default": "system",
"enum": ["system"],
"type": "string",
},
},
"required": ["content"], |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | },
"FunctionMessage": {
"title": "FunctionMessage",
"description": "A Message for passing the result of executing a function back to a model.",
"type": "object",
"properties": {
"content": {
"title": "Content",
"anyOf": [
{"type": "string"},
{
"type": "array",
"items": {
"anyOf": [{"type": "string"}, {"type": "object"}]
},
},
],
},
"additional_kwargs": {
"title": "Additional Kwargs",
"type": "object",
},
"type": {
"title": "Type",
"default": "function",
"enum": ["function"],
"type": "string",
},
"name": {"title": "Name", "type": "string"},
                },
                "required": ["content", "name"],
},
"ToolMessage": {
"title": "ToolMessage",
"description": "A Message for passing the result of executing a tool back to a model.",
"type": "object",
"properties": {
"content": {
"title": "Content",
"anyOf": [
{"type": "string"},
{
"type": "array",
"items": {
"anyOf": [{"type": "string"}, {"type": "object"}]
},
},
],
},
"additional_kwargs": {
"title": "Additional Kwargs",
"type": "object",
},
"type": {
"title": "Type",
"default": "tool",
"enum": ["tool"],
"type": "string",
},
"tool_call_id": {"title": "Tool Call Id", "type": "string"}, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | },
"required": ["content", "tool_call_id"],
},
},
}
assert chat_prompt.output_schema.schema() == snapshot
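    # A plain PromptTemplate derives its input schema from the template
    # variables and produces a StringPromptValue as output.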
prompt = PromptTemplate.from_template("Hello, {name}!")
assert prompt.input_schema.schema() == {
"title": "PromptInput",
"type": "object",
"properties": {"name": {"title": "Name", "type": "string"}},
}
assert prompt.output_schema.schema() == snapshot
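    # .map() lifts the prompt to run over a list of inputs, so the input
    # schema becomes an array of PromptInput objects.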
prompt_mapper = PromptTemplate.from_template("Hello, {name}!").map()
assert prompt_mapper.input_schema.schema() == {
"definitions": {
"PromptInput": {
"properties": {"name": {"title": "Name", "type": "string"}},
"title": "PromptInput",
"type": "object",
}
},
"items": {"$ref": "#/definitions/PromptInput"},
"type": "array",
"title": "RunnableEachInput",
}
assert prompt_mapper.output_schema.schema() == snapshot
list_parser = CommaSeparatedListOutputParser()
assert list_parser.input_schema.schema() == snapshot
    assert list_parser.output_schema.schema() == {
        "title": "CommaSeparatedListOutputParserOutput",
"type": "array",
"items": {"type": "string"},
}
seq = prompt | fake_llm | list_parser
assert seq.input_schema.schema() == {
"title": "PromptInput",
"type": "object",
"properties": {"name": {"title": "Name", "type": "string"}},
}
assert seq.output_schema.schema() == {
"type": "array",
"items": {"type": "string"},
"title": "CommaSeparatedListOutputParserOutput",
}
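    # RouterRunnable expects a {"key": ..., "input": ...} mapping, surfaced
    # below as the RouterInput schema.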
router: Runnable = RouterRunnable({})
assert router.input_schema.schema() == {
"title": "RouterRunnableInput",
"$ref": "#/definitions/RouterInput",
"definitions": {
"RouterInput": {
"title": "RouterInput",
"type": "object",
"properties": {
"key": {"title": "Key", "type": "string"},
"input": {"title": "Input"},
},
"required": ["key", "input"],
}
        },
    }
assert router.output_schema.schema() == {"title": "RouterRunnableOutput"}
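    # A dict literal inside a sequence is coerced to a RunnableParallel, so
    # the output schema is an object with one property per branch.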
seq_w_map: Runnable = (
prompt
| fake_llm
| {
"original": RunnablePassthrough(input_type=str),
"as_list": list_parser,
"length": typed_lambda_impl,
}
)
assert seq_w_map.input_schema.schema() == {
"title": "PromptInput",
"type": "object",
"properties": {"name": {"title": "Name", "type": "string"}},
}
assert seq_w_map.output_schema.schema() == {
"title": "RunnableParallelOutput",
"type": "object",
"properties": {
"original": {"title": "Original", "type": "string"},
"length": {"title": "Length", "type": "integer"},
"as_list": {
"title": "As List",
"type": "array",
"items": {"type": "string"},
},
},
}
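# RunnablePassthrough.assign merges newly computed keys into the input dict;
# the assertions below check how that is reflected in the inferred schemas.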
def test_passthrough_assign_schema() -> None:
    retriever = FakeRetriever()
prompt = PromptTemplate.from_template("{context} {question}")
fake_llm = FakeListLLM(responses=["a"])
seq_w_assign: Runnable = (
RunnablePassthrough.assign(context=itemgetter("question") | retriever)
| prompt
| fake_llm
)
assert seq_w_assign.input_schema.schema() == {
"properties": {"question": {"title": "Question", "type": "string"}},
"title": "RunnableSequenceInput",
"type": "object",
}
assert seq_w_assign.output_schema.schema() == {
"title": "FakeListLLMOutput",
"type": "string",
}
invalid_seq_w_assign: Runnable = (
RunnablePassthrough.assign(context=itemgetter("question") | retriever)
| fake_llm
)
    assert invalid_seq_w_assign.input_schema.schema() == {
        "properties": {"question": {"title": "Question"}},
"title": "RunnableParallelInput",
"type": "object",
}
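# RunnableLambda inspects the wrapped callable to infer a dict input schema
# from the keys the function accesses.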
@pytest.mark.skipif(
sys.version_info < (3, 9), reason="Requires python version >= 3.9 to run."
)
def test_lambda_schemas() -> None:
first_lambda = lambda x: x["hello"]
assert RunnableLambda(first_lambda).input_schema.schema() == {
"title": "RunnableLambdaInput",
"type": "object",
"properties": {"hello": {"title": "Hello"}},
}
second_lambda = lambda x, y: (x["hello"], x["bye"], y["bah"])
assert RunnableLambda(
second_lambda,
).input_schema.schema() == {
"title": "RunnableLambdaInput",
"type": "object",
"properties": {"hello": {"title": "Hello"}, "bye": {"title": "Bye"}},
}
def get_value(input):
return input["variable_name"]
assert RunnableLambda(get_value).input_schema.schema() == {
"title": "RunnableLambdaInput",
"type": "object",
"properties": {"variable_name": {"title": "Variable Name"}},
}
    async def aget_value(input):
        return (input["variable_name"], input.get("another"))
assert RunnableLambda(aget_value).input_schema.schema() == {
"title": "RunnableLambdaInput",
"type": "object",
"properties": {
"another": {"title": "Another"},
"variable_name": {"title": "Variable Name"},
},
}
async def aget_values(input):
return {
"hello": input["variable_name"],
"bye": input["variable_name"],
"byebye": input["yo"],
}
assert RunnableLambda(aget_values).input_schema.schema() == {
"title": "RunnableLambdaInput",
"type": "object",
"properties": {
"variable_name": {"title": "Variable Name"},
"yo": {"title": "Yo"},
},
}
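    # With TypedDict annotations, the schemas come from the type hints rather
    # than from inferred attribute access.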
    class InputType(TypedDict):
        variable_name: str
yo: int
class OutputType(TypedDict):
hello: str
bye: str
byebye: int
async def aget_values_typed(input: InputType) -> OutputType:
return {
"hello": input["variable_name"],
"bye": input["variable_name"],
"byebye": input["yo"],
}
assert (
RunnableLambda(aget_values_typed).input_schema.schema()
== {
"title": "RunnableLambdaInput",
"$ref": "#/definitions/InputType",
"definitions": {
"InputType": {
"properties": { |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | "variable_name": {
"title": "Variable " "Name",
"type": "string",
},
"yo": {"title": "Yo", "type": "integer"},
},
"required": ["variable_name", "yo"],
"title": "InputType",
"type": "object",
}
},
}
)
assert RunnableLambda(aget_values_typed).output_schema.schema() == {
"title": "RunnableLambdaOutput",
"$ref": "#/definitions/OutputType",
"definitions": {
"OutputType": {
"properties": {
"bye": {"title": "Bye", "type": "string"},
"byebye": {"title": "Byebye", "type": "integer"},
"hello": {"title": "Hello", "type": "string"},
},
"required": ["hello", "bye", "byebye"],
"title": "OutputType",
"type": "object",
}
},
}
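# with_types should accept parameterized generics such as List[int] and
# Sequence[int] without raising.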
def test_with_types_with_type_generics() -> None:
    """Verify that with_types works if we use things like List[int]"""
def foo(x: int) -> None:
"""Add one to the input."""
raise NotImplementedError()
RunnableLambda(foo).with_types(
output_type=List[int],
input_type=List[int],
)
RunnableLambda(foo).with_types(
output_type=Sequence[int],
input_type=Sequence[int],
)
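# A two-step chain: chain1 resolves the city, which is combined with the
# passthrough "language" key before being fed to the second prompt.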
def test_schema_complex_seq() -> None:
prompt1 = ChatPromptTemplate.from_template("what is the city {person} is from?")
prompt2 = ChatPromptTemplate.from_template(
"what country is the city {city} in? respond in {language}"
)
model = FakeListChatModel(responses=[""])
chain1 = prompt1 | model | StrOutputParser()
chain2: Runnable = (
{"city": chain1, "language": itemgetter("language")}
| prompt2
| model
| StrOutputParser()
)
assert chain2.input_schema.schema() == {
"title": "RunnableParallelInput",
"type": "object", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | "properties": {
"person": {"title": "Person", "type": "string"},
"language": {"title": "Language"},
},
}
assert chain2.output_schema.schema() == {
"title": "StrOutputParserOutput",
"type": "string",
}
assert chain2.with_types(input_type=str).input_schema.schema() == {
"title": "RunnableBindingInput",
"type": "string",
}
assert chain2.with_types(input_type=int).output_schema.schema() == {
"title": "StrOutputParserOutput",
"type": "string",
}
class InputType(BaseModel):
person: str
assert chain2.with_types(input_type=InputType).input_schema.schema() == {
"title": "InputType",
"type": "object",
"properties": {"person": {"title": "Person", "type": "string"}},
"required": ["person"],
}
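# configurable_fields exposes chosen attributes through the "configurable"
# section of the RunnableConfig. A minimal usage sketch (mirroring this test,
# not official docs):
#   llm = FakeListLLM(responses=["a"]).configurable_fields(
#       responses=ConfigurableField(id="llm_responses", name="LLM Responses")
#   )
#   llm.with_config(configurable={"llm_responses": ["b"]}).invoke("...")  # -> "b"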
def test_configurable_fields() -> None:
fake_llm = FakeListLLM(responses=["a"])
assert fake_llm.invoke("...") == "a"
fake_llm_configurable = fake_llm.configurable_fields(
        responses=ConfigurableField(
            id="llm_responses",
name="LLM Responses",
description="A list of fake responses for this LLM",
)
)
assert fake_llm_configurable.invoke("...") == "a"
assert fake_llm_configurable.config_schema().schema() == {
"title": "RunnableConfigurableFieldsConfig",
"type": "object",
"properties": {"configurable": {"$ref": "#/definitions/Configurable"}},
"definitions": {
"Configurable": {
"title": "Configurable",
"type": "object",
"properties": {
"llm_responses": {
"title": "LLM Responses",
"description": "A list of fake responses for this LLM",
"default": ["a"],
"type": "array",
"items": {"type": "string"},
}
},
}
},
}
fake_llm_configured = fake_llm_configurable.with_config(
configurable={"llm_responses": ["b"]}
)
    assert fake_llm_configured.invoke("...") == "b"
    prompt = PromptTemplate.from_template("Hello, {name}!")
assert prompt.invoke({"name": "John"}) == StringPromptValue(text="Hello, John!")
prompt_configurable = prompt.configurable_fields(
template=ConfigurableField(
id="prompt_template",
name="Prompt Template",
description="The prompt template for this chain",
)
)
assert prompt_configurable.invoke({"name": "John"}) == StringPromptValue(
text="Hello, John!"
)
assert prompt_configurable.config_schema().schema() == {
"title": "RunnableConfigurableFieldsConfig",
"type": "object",
"properties": {"configurable": {"$ref": "#/definitions/Configurable"}},
"definitions": {
"Configurable": {
"title": "Configurable",
"type": "object",
"properties": {
"prompt_template": {
"title": "Prompt Template",
"description": "The prompt template for this chain",
"default": "Hello, {name}!",
"type": "string",
}
},
}
        },
    }
prompt_configured = prompt_configurable.with_config(
configurable={"prompt_template": "Hello, {name}! {name}!"}
)
assert prompt_configured.invoke({"name": "John"}) == StringPromptValue(
text="Hello, John! John!"
)
assert prompt_configurable.with_config(
configurable={"prompt_template": "Hello {name} in {lang}"}
).input_schema.schema() == {
"title": "PromptInput",
"type": "object",
"properties": {
"lang": {"title": "Lang", "type": "string"},
"name": {"title": "Name", "type": "string"},
},
}
chain_configurable = prompt_configurable | fake_llm_configurable | StrOutputParser()
assert chain_configurable.invoke({"name": "John"}) == "a"
assert chain_configurable.config_schema().schema() == {
"title": "RunnableSequenceConfig",
"type": "object",
"properties": {"configurable": {"$ref": "#/definitions/Configurable"}},
"definitions": {
"Configurable": {
"title": "Configurable",
"type": "object",
"properties": {
"llm_responses": {
"title": "LLM Responses", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | "description": "A list of fake responses for this LLM",
"default": ["a"],
"type": "array",
"items": {"type": "string"},
},
"prompt_template": {
"title": "Prompt Template",
"description": "The prompt template for this chain",
"default": "Hello, {name}!",
"type": "string",
},
},
}
},
}
assert (
chain_configurable.with_config(
configurable={
"prompt_template": "A very good morning to you, {name} {lang}!",
"llm_responses": ["c"],
}
).invoke({"name": "John", "lang": "en"})
== "c"
)
assert chain_configurable.with_config(
configurable={
"prompt_template": "A very good morning to you, {name} {lang}!",
"llm_responses": ["c"],
}
    ).input_schema.schema() == {
        "title": "PromptInput",
"type": "object",
"properties": {
"lang": {"title": "Lang", "type": "string"},
"name": {"title": "Name", "type": "string"},
},
}
chain_with_map_configurable: Runnable = prompt_configurable | {
"llm1": fake_llm_configurable | StrOutputParser(),
"llm2": fake_llm_configurable | StrOutputParser(),
"llm3": fake_llm.configurable_fields(
responses=ConfigurableField("other_responses")
)
| StrOutputParser(),
}
assert chain_with_map_configurable.invoke({"name": "John"}) == {
"llm1": "a",
"llm2": "a",
"llm3": "a",
}
assert chain_with_map_configurable.config_schema().schema() == {
"title": "RunnableSequenceConfig",
"type": "object",
"properties": {"configurable": {"$ref": "#/definitions/Configurable"}},
"definitions": {
"Configurable": {
"title": "Configurable",
"type": "object",
"properties": {
"llm_responses": { |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | "title": "LLM Responses",
"description": "A list of fake responses for this LLM",
"default": ["a"],
"type": "array",
"items": {"type": "string"},
},
"other_responses": {
"title": "Other Responses",
"default": ["a"],
"type": "array",
"items": {"type": "string"},
},
"prompt_template": {
"title": "Prompt Template",
"description": "The prompt template for this chain",
"default": "Hello, {name}!",
"type": "string",
},
},
}
},
}
assert chain_with_map_configurable.with_config(
configurable={
"prompt_template": "A very good morning to you, {name}!",
"llm_responses": ["c"],
"other_responses": ["d"],
}
).invoke({"name": "John"}) == {"llm1": "c", "llm2": "c", "llm3": "d"}
def test_configurable_alts_factory() -> None:
    fake_llm = FakeListLLM(responses=["a"]).configurable_alternatives(
ConfigurableField(id="llm", name="LLM"),
chat=partial(FakeListLLM, responses=["b"]),
)
assert fake_llm.invoke("...") == "a"
assert fake_llm.with_config(configurable={"llm": "chat"}).invoke("...") == "b"
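# With prefix_keys=True, fields belonging to an alternative are namespaced as
# "<id>==<alternative>/<field>" (e.g. "llm==chat/responses") in the config
# schema, while is_shared fields like chat_sleep stay unprefixed.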
def test_configurable_fields_prefix_keys() -> None:
fake_chat = FakeListChatModel(responses=["b"]).configurable_fields(
responses=ConfigurableFieldMultiOption(
id="responses",
name="Chat Responses",
options={
"hello": "A good morning to you!",
"bye": "See you later!",
"helpful": "How can I help you?",
},
default=["hello", "bye"],
),
sleep=ConfigurableField(
id="chat_sleep",
is_shared=True,
),
)
fake_llm = (
FakeListLLM(responses=["a"])
        .configurable_fields(
            responses=ConfigurableField(
id="responses",
name="LLM Responses",
description="A list of fake responses for this LLM",
)
)
.configurable_alternatives(
ConfigurableField(id="llm", name="LLM"),
chat=fake_chat | StrOutputParser(),
prefix_keys=True,
)
)
prompt = PromptTemplate.from_template("Hello, {name}!").configurable_fields(
template=ConfigurableFieldSingleOption(
id="prompt_template",
name="Prompt Template",
description="The prompt template for this chain",
options={
"hello": "Hello, {name}!",
"good_morning": "A very good morning to you, {name}!",
},
default="hello",
)
)
chain = prompt | fake_llm
assert chain.config_schema().schema() == {
"title": "RunnableSequenceConfig",
"type": "object",
"properties": {"configurable": {"$ref": "#/definitions/Configurable"}},
"definitions": { |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | "LLM": {
"title": "LLM",
"description": "An enumeration.",
"enum": ["chat", "default"],
"type": "string",
},
"Chat_Responses": {
"title": "Chat Responses",
"description": "An enumeration.",
"enum": ["hello", "bye", "helpful"],
"type": "string",
},
"Prompt_Template": {
"title": "Prompt Template",
"description": "An enumeration.",
"enum": ["hello", "good_morning"],
"type": "string",
},
"Configurable": {
"title": "Configurable",
"type": "object",
"properties": {
"prompt_template": {
"title": "Prompt Template",
"description": "The prompt template for this chain",
"default": "hello",
"allOf": [{"$ref": "#/definitions/Prompt_Template"}],
},
"llm": {
"title": "LLM", |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | "default": "default",
"allOf": [{"$ref": "#/definitions/LLM"}],
},
"chat_sleep": {
"title": "Chat Sleep",
"type": "number",
},
"llm==chat/responses": {
"title": "Chat Responses",
"default": ["hello", "bye"],
"type": "array",
"items": {"$ref": "#/definitions/Chat_Responses"},
},
"llm==default/responses": {
"title": "LLM Responses",
"description": "A list of fake responses for this LLM",
"default": ["a"],
"type": "array",
"items": {"type": "string"},
},
},
},
},
}
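# End-to-end example combining configurable fields, a configurable
# alternative, and a single-option prompt template in one chain.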
def test_configurable_fields_example() -> None:
fake_chat = FakeListChatModel(responses=["b"]).configurable_fields(
        responses=ConfigurableFieldMultiOption(
            id="chat_responses",
name="Chat Responses",
options={
"hello": "A good morning to you!",
"bye": "See you later!",
"helpful": "How can I help you?",
},
default=["hello", "bye"],
)
)
fake_llm = (
FakeListLLM(responses=["a"])
.configurable_fields(
responses=ConfigurableField(
id="llm_responses",
name="LLM Responses",
description="A list of fake responses for this LLM",
)
)
.configurable_alternatives(
ConfigurableField(id="llm", name="LLM"),
chat=fake_chat | StrOutputParser(),
)
)
prompt = PromptTemplate.from_template("Hello, {name}!").configurable_fields(
template=ConfigurableFieldSingleOption(
id="prompt_template",
name="Prompt Template",
description="The prompt template for this chain",
            options={
                "hello": "Hello, {name}!",
"good_morning": "A very good morning to you, {name}!",
},
default="hello",
)
)
chain_configurable = prompt | fake_llm | (lambda x: {"name": x}) | prompt | fake_llm
assert chain_configurable.invoke({"name": "John"}) == "a"
assert chain_configurable.config_schema().schema() == {
"title": "RunnableSequenceConfig",
"type": "object",
"properties": {"configurable": {"$ref": "#/definitions/Configurable"}},
"definitions": {
"LLM": {
"title": "LLM",
"description": "An enumeration.",
"enum": ["chat", "default"],
"type": "string",
},
"Chat_Responses": {
"description": "An enumeration.",
"enum": ["hello", "bye", "helpful"],
"title": "Chat Responses",
"type": "string",
},
"Prompt_Template": {
"description": "An enumeration.",
"enum": ["hello", "good_morning"],
"title": "Prompt Template", |
"type": "string",
},
"Configurable": {
"title": "Configurable",
"type": "object",
"properties": {
"chat_responses": {
"default": ["hello", "bye"],
"items": {"$ref": "#/definitions/Chat_Responses"},
"title": "Chat Responses",
"type": "array",
},
"llm": {
"title": "LLM",
"default": "default",
"allOf": [{"$ref": "#/definitions/LLM"}],
},
"llm_responses": {
"title": "LLM Responses",
"description": "A list of fake responses for this LLM",
"default": ["a"],
"type": "array",
"items": {"type": "string"},
},
"prompt_template": {
"title": "Prompt Template",
"description": "The prompt template for this chain",
"default": "hello",
"allOf": [{"$ref": "#/definitions/Prompt_Template"}],
}, |
},
},
},
}
with pytest.raises(ValueError):
chain_configurable.with_config(configurable={"llm123": "chat"})
assert (
chain_configurable.with_config(configurable={"llm": "chat"}).invoke(
{"name": "John"}
)
== "A good morning to you!"
)
assert (
chain_configurable.with_config(
configurable={"llm": "chat", "chat_responses": ["helpful"]}
).invoke({"name": "John"})
== "How can I help you?"
)
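# RunnablePassthrough(fn) should "tap" values flowing through a sequence: the
# function is called with each output (sync and async, invoke and stream)
# while the output itself passes through unchanged.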
async def test_passthrough_tap_async(mocker: MockerFixture) -> None:
fake = FakeRunnable()
mock = mocker.Mock()
seq: Runnable = fake | RunnablePassthrough(mock)
assert await seq.ainvoke("hello") == 5
assert mock.call_args_list == [mocker.call(5)]
mock.reset_mock()
assert [
part async for part in seq.astream("hello", dict(metadata={"key": "value"}))
] == [5]
assert mock.call_args_list == [mocker.call(5)]
mock.reset_mock() |
assert seq.invoke("hello") == 5
assert mock.call_args_list == [mocker.call(5)]
mock.reset_mock()
assert [part for part in seq.stream("hello", dict(metadata={"key": "value"}))] == [
5
]
assert mock.call_args_list == [mocker.call(5)]
mock.reset_mock()
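# with_config should merge the bound config (tags, metadata, max_concurrency,
# recursion_limit, callbacks) into every invoke/batch/stream call, for both
# sync and async entry points.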
async def test_with_config(mocker: MockerFixture) -> None:
fake = FakeRunnable()
spy = mocker.spy(fake, "invoke")
assert fake.with_config(tags=["a-tag"]).invoke("hello") == 5
assert spy.call_args_list == [
mocker.call("hello", dict(tags=["a-tag"])),
]
spy.reset_mock()
fake_1: Runnable = RunnablePassthrough()
fake_2: Runnable = RunnablePassthrough()
spy_seq_step = mocker.spy(fake_1.__class__, "invoke")
sequence = fake_1.with_config(tags=["a-tag"]) | fake_2.with_config(
tags=["b-tag"], max_concurrency=5
)
assert sequence.invoke("hello") == "hello"
assert len(spy_seq_step.call_args_list) == 2
for i, call in enumerate(spy_seq_step.call_args_list):
assert call.args[1] == "hello"
if i == 0:
assert call.args[2].get("tags") == ["a-tag"]
assert call.args[2].get("max_concurrency") is None
else: |
assert call.args[2].get("tags") == ["b-tag"]
assert call.args[2].get("max_concurrency") == 5
mocker.stop(spy_seq_step)
assert [
*fake.with_config(tags=["a-tag"]).stream(
"hello", dict(metadata={"key": "value"})
)
] == [5]
assert spy.call_args_list == [
mocker.call("hello", dict(tags=["a-tag"], metadata={"key": "value"})),
]
spy.reset_mock()
assert fake.with_config(recursion_limit=5).batch(
["hello", "wooorld"], [dict(tags=["a-tag"]), dict(metadata={"key": "value"})]
) == [5, 7]
assert len(spy.call_args_list) == 2
for i, call in enumerate(
sorted(spy.call_args_list, key=lambda x: 0 if x.args[0] == "hello" else 1)
):
assert call.args[0] == ("hello" if i == 0 else "wooorld")
if i == 0:
assert call.args[1].get("recursion_limit") == 5
assert call.args[1].get("tags") == ["a-tag"]
assert call.args[1].get("metadata") == {}
else:
assert call.args[1].get("recursion_limit") == 5
assert call.args[1].get("tags") == []
assert call.args[1].get("metadata") == {"key": "value"}
spy.reset_mock()
assert fake.with_config(metadata={"a": "b"}).batch( |
["hello", "wooorld"], dict(tags=["a-tag"])
) == [5, 7]
assert len(spy.call_args_list) == 2
for i, call in enumerate(spy.call_args_list):
assert call.args[0] == ("hello" if i == 0 else "wooorld")
assert call.args[1].get("tags") == ["a-tag"]
assert call.args[1].get("metadata") == {"a": "b"}
spy.reset_mock()
handler = ConsoleCallbackHandler()
assert (
await fake.with_config(metadata={"a": "b"}).ainvoke(
"hello", config={"callbacks": [handler]}
)
== 5
)
assert spy.call_args_list == [
mocker.call("hello", dict(callbacks=[handler], metadata={"a": "b"})),
]
spy.reset_mock()
assert [
part async for part in fake.with_config(metadata={"a": "b"}).astream("hello")
] == [5]
assert spy.call_args_list == [
mocker.call("hello", dict(metadata={"a": "b"})),
]
spy.reset_mock()
assert await fake.with_config(recursion_limit=5, tags=["c"]).abatch(
["hello", "wooorld"], dict(metadata={"key": "value"})
) == [
5, |
7,
]
assert spy.call_args_list == [
mocker.call(
"hello",
dict(
metadata={"key": "value"},
tags=["c"],
callbacks=None,
recursion_limit=5,
),
),
mocker.call(
"wooorld",
dict(
metadata={"key": "value"},
tags=["c"],
callbacks=None,
recursion_limit=5,
),
),
]
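# The default batch/stream and async method implementations should all
# delegate to the runnable's invoke method, forwarding per-call config.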
async def test_default_method_implementations(mocker: MockerFixture) -> None:
fake = FakeRunnable()
spy = mocker.spy(fake, "invoke")
assert fake.invoke("hello", dict(tags=["a-tag"])) == 5
assert spy.call_args_list == [
mocker.call("hello", dict(tags=["a-tag"])),
]
spy.reset_mock() |
assert [*fake.stream("hello", dict(metadata={"key": "value"}))] == [5]
assert spy.call_args_list == [
mocker.call("hello", dict(metadata={"key": "value"})),
]
spy.reset_mock()
assert fake.batch(
["hello", "wooorld"], [dict(tags=["a-tag"]), dict(metadata={"key": "value"})]
) == [5, 7]
assert len(spy.call_args_list) == 2
for i, call in enumerate(spy.call_args_list):
assert call.args[0] == ("hello" if i == 0 else "wooorld")
if i == 0:
assert call.args[1].get("tags") == ["a-tag"]
assert call.args[1].get("metadata") == {}
else:
assert call.args[1].get("tags") == []
assert call.args[1].get("metadata") == {"key": "value"}
spy.reset_mock()
assert fake.batch(["hello", "wooorld"], dict(tags=["a-tag"])) == [5, 7]
assert len(spy.call_args_list) == 2
for i, call in enumerate(spy.call_args_list):
assert call.args[0] == ("hello" if i == 0 else "wooorld")
assert call.args[1].get("tags") == ["a-tag"]
assert call.args[1].get("metadata") == {}
spy.reset_mock()
assert await fake.ainvoke("hello", config={"callbacks": []}) == 5
assert spy.call_args_list == [
mocker.call("hello", dict(callbacks=[])),
]
spy.reset_mock() |
assert [part async for part in fake.astream("hello")] == [5]
assert spy.call_args_list == [
mocker.call("hello", None),
]
spy.reset_mock()
assert await fake.abatch(["hello", "wooorld"], dict(metadata={"key": "value"})) == [
5,
7,
]
assert spy.call_args_list == [
mocker.call(
"hello",
dict(
metadata={"key": "value"},
tags=[],
callbacks=None,
recursion_limit=25,
),
),
mocker.call(
"wooorld",
dict(
metadata={"key": "value"},
tags=[],
callbacks=None,
recursion_limit=25,
),
),
]
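# ChatPromptTemplate should produce the same ChatPromptValue across invoke,
# batch, stream and their async counterparts, and astream_log should emit
# patches that build up to that same final output.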
async def test_prompt() -> None: |
prompt = ChatPromptTemplate.from_messages(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessagePromptTemplate.from_template("{question}"),
]
)
expected = ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
)
assert prompt.invoke({"question": "What is your name?"}) == expected
assert prompt.batch(
[
{"question": "What is your name?"},
{"question": "What is your favorite color?"},
]
) == [
expected,
ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your favorite color?"),
]
),
]
assert [*prompt.stream({"question": "What is your name?"})] == [expected]
assert await prompt.ainvoke({"question": "What is your name?"}) == expected
assert await prompt.abatch( |
[
{"question": "What is your name?"},
{"question": "What is your favorite color?"},
]
) == [
expected,
ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your favorite color?"),
]
),
]
assert [
part async for part in prompt.astream({"question": "What is your name?"})
] == [expected]
stream_log = [
part async for part in prompt.astream_log({"question": "What is your name?"})
]
assert len(stream_log[0].ops) == 1
assert stream_log[0].ops[0]["op"] == "replace"
assert stream_log[0].ops[0]["path"] == ""
assert stream_log[0].ops[0]["value"]["logs"] == {}
assert stream_log[0].ops[0]["value"]["final_output"] is None
assert stream_log[0].ops[0]["value"]["streamed_output"] == []
assert isinstance(stream_log[0].ops[0]["value"]["id"], str)
assert stream_log[1:] == [
RunLogPatch(
{
"op": "replace", |
"path": "/final_output",
"value": ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
),
}
),
RunLogPatch({"op": "add", "path": "/streamed_output/-", "value": expected}),
]
stream_log_state = [
part
async for part in prompt.astream_log(
{"question": "What is your name?"}, diff=False
)
]
stream_log[0].ops[0]["value"]["id"] = "00000000-0000-0000-0000-000000000000"
stream_log_state[-1].ops[0]["value"]["id"] = "00000000-0000-0000-0000-000000000000"
stream_log_state[-1].state["id"] = "00000000-0000-0000-0000-000000000000"
assert stream_log_state[-1].ops == [op for chunk in stream_log for op in chunk.ops]
assert stream_log_state[-1] == RunLog(
*[op for chunk in stream_log for op in chunk.ops],
state={
"final_output": ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"), |
]
),
"id": "00000000-0000-0000-0000-000000000000",
"logs": {},
"streamed_output": [
ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
)
],
},
)
async with atrace_as_chain_group("a_group") as manager:
stream_log_nested = [
part
async for part in prompt.astream_log(
{"question": "What is your name?"}, config={"callbacks": manager}
)
]
assert len(stream_log_nested[0].ops) == 1
assert stream_log_nested[0].ops[0]["op"] == "replace"
assert stream_log_nested[0].ops[0]["path"] == ""
assert stream_log_nested[0].ops[0]["value"]["logs"] == {}
assert stream_log_nested[0].ops[0]["value"]["final_output"] is None
assert stream_log_nested[0].ops[0]["value"]["streamed_output"] == []
assert isinstance(stream_log_nested[0].ops[0]["value"]["id"], str)
assert stream_log_nested[1:] == [ |
RunLogPatch(
{
"op": "replace",
"path": "/final_output",
"value": ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
),
}
),
RunLogPatch({"op": "add", "path": "/streamed_output/-", "value": expected}),
]
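# Extra input keys are ignored, but a missing template variable raises KeyError.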
def test_prompt_template_params() -> None:
prompt = ChatPromptTemplate.from_template(
"Respond to the following question: {question}"
)
result = prompt.invoke(
{
"question": "test",
"topic": "test",
}
)
assert result == ChatPromptValue(
messages=[HumanMessage(content="Respond to the following question: test")]
)
with pytest.raises(KeyError):
prompt.invoke({})
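# with_listeners should fire the on_start/on_end callbacks exactly once per
# run, including when the call happens inside an existing trace group.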
def test_with_listeners(mocker: MockerFixture) -> None: |
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
chat = FakeListChatModel(responses=["foo"])
chain: Runnable = prompt | chat
mock_start = mocker.Mock()
mock_end = mocker.Mock()
chain.with_listeners(on_start=mock_start, on_end=mock_end).invoke(
{"question": "Who are you?"}
)
assert mock_start.call_count == 1
assert mock_start.call_args[0][0].name == "RunnableSequence"
assert mock_end.call_count == 1
mock_start.reset_mock()
mock_end.reset_mock()
with trace_as_chain_group("hello") as manager:
chain.with_listeners(on_start=mock_start, on_end=mock_end).invoke(
{"question": "Who are you?"}, {"callbacks": manager}
)
assert mock_start.call_count == 1
assert mock_start.call_args[0][0].name == "RunnableSequence"
assert mock_end.call_count == 1
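# Same as the test above, via ainvoke and an async trace group.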
async def test_with_listeners_async(mocker: MockerFixture) -> None: |
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
chat = FakeListChatModel(responses=["foo"])
chain: Runnable = prompt | chat
mock_start = mocker.Mock()
mock_end = mocker.Mock()
await chain.with_listeners(on_start=mock_start, on_end=mock_end).ainvoke(
{"question": "Who are you?"}
)
assert mock_start.call_count == 1
assert mock_start.call_args[0][0].name == "RunnableSequence"
assert mock_end.call_count == 1
mock_start.reset_mock()
mock_end.reset_mock()
async with atrace_as_chain_group("hello") as manager:
await chain.with_listeners(on_start=mock_start, on_end=mock_end).ainvoke(
{"question": "Who are you?"}, {"callbacks": manager}
)
assert mock_start.call_count == 1
assert mock_start.call_args[0][0].name == "RunnableSequence"
assert mock_end.call_count == 1
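# A prompt piped into a chat model: invoke/batch/stream should pass the
# formatted ChatPromptValue to the model and surface its (chunked) messages.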
@freeze_time("2023-01-01")
def test_prompt_with_chat_model( |
mocker: MockerFixture, snapshot: SnapshotAssertion
) -> None:
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
chat = FakeListChatModel(responses=["foo"])
chain: Runnable = prompt | chat
assert repr(chain) == snapshot
assert isinstance(chain, RunnableSequence)
assert chain.first == prompt
assert chain.middle == []
assert chain.last == chat
assert dumps(chain, pretty=True) == snapshot
prompt_spy = mocker.spy(prompt.__class__, "invoke")
chat_spy = mocker.spy(chat.__class__, "invoke")
tracer = FakeTracer()
assert chain.invoke(
{"question": "What is your name?"}, dict(callbacks=[tracer])
) == AIMessage(content="foo")
assert prompt_spy.call_args.args[1] == {"question": "What is your name?"}
assert chat_spy.call_args.args[1] == ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
] |
)
assert tracer.runs == snapshot
mocker.stop(prompt_spy)
mocker.stop(chat_spy)
prompt_spy = mocker.spy(prompt.__class__, "batch")
chat_spy = mocker.spy(chat.__class__, "batch")
tracer = FakeTracer()
assert chain.batch(
[
{"question": "What is your name?"},
{"question": "What is your favorite color?"},
],
dict(callbacks=[tracer]),
) == [
AIMessage(content="foo"),
AIMessage(content="foo"),
]
assert prompt_spy.call_args.args[1] == [
{"question": "What is your name?"},
{"question": "What is your favorite color?"},
]
assert chat_spy.call_args.args[1] == [
ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
),
ChatPromptValue( |
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your favorite color?"),
]
),
]
assert (
len(
[
r
for r in tracer.runs
if r.parent_run_id is None and len(r.child_runs) == 2
]
)
== 2
), "Each of 2 outer runs contains exactly two inner runs (1 prompt, 1 chat)"
mocker.stop(prompt_spy)
mocker.stop(chat_spy)
prompt_spy = mocker.spy(prompt.__class__, "invoke")
chat_spy = mocker.spy(chat.__class__, "stream")
tracer = FakeTracer()
assert [
*chain.stream({"question": "What is your name?"}, dict(callbacks=[tracer]))
] == [
AIMessageChunk(content="f"),
AIMessageChunk(content="o"),
AIMessageChunk(content="o"),
]
assert prompt_spy.call_args.args[1] == {"question": "What is your name?"} |
assert chat_spy.call_args.args[1] == ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
)
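# Async counterpart of the test above, exercising ainvoke/abatch/astream.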
@freeze_time("2023-01-01")
async def test_prompt_with_chat_model_async(
mocker: MockerFixture, snapshot: SnapshotAssertion
) -> None:
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
chat = FakeListChatModel(responses=["foo"])
chain: Runnable = prompt | chat
assert repr(chain) == snapshot
assert isinstance(chain, RunnableSequence)
assert chain.first == prompt
assert chain.middle == []
assert chain.last == chat
assert dumps(chain, pretty=True) == snapshot
prompt_spy = mocker.spy(prompt.__class__, "ainvoke")
chat_spy = mocker.spy(chat.__class__, "ainvoke")
tracer = FakeTracer()
assert await chain.ainvoke(
{"question": "What is your name?"}, dict(callbacks=[tracer])
) == AIMessage(content="foo")
assert prompt_spy.call_args.args[1] == {"question": "What is your name?"} |
assert chat_spy.call_args.args[1] == ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
)
assert tracer.runs == snapshot
mocker.stop(prompt_spy)
mocker.stop(chat_spy)
prompt_spy = mocker.spy(prompt.__class__, "abatch")
chat_spy = mocker.spy(chat.__class__, "abatch")
tracer = FakeTracer()
assert await chain.abatch(
[
{"question": "What is your name?"},
{"question": "What is your favorite color?"},
],
dict(callbacks=[tracer]),
) == [
AIMessage(content="foo"),
AIMessage(content="foo"),
]
assert prompt_spy.call_args.args[1] == [
{"question": "What is your name?"},
{"question": "What is your favorite color?"},
]
assert chat_spy.call_args.args[1] == [
ChatPromptValue(
messages=[ |
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
),
ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your favorite color?"),
]
),
]
assert (
len(
[
r
for r in tracer.runs
if r.parent_run_id is None and len(r.child_runs) == 2
]
)
== 2
), "Each of 2 outer runs contains exactly two inner runs (1 prompt, 1 chat)"
mocker.stop(prompt_spy)
mocker.stop(chat_spy)
prompt_spy = mocker.spy(prompt.__class__, "ainvoke")
chat_spy = mocker.spy(chat.__class__, "astream")
tracer = FakeTracer()
assert [
a
async for a in chain.astream(
{"question": "What is your name?"}, dict(callbacks=[tracer])
)
] == [
AIMessageChunk(content="f"),
AIMessageChunk(content="o"),
AIMessageChunk(content="o"),
]
assert prompt_spy.call_args.args[1] == {"question": "What is your name?"}
assert chat_spy.call_args.args[1] == ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
)
@freeze_time("2023-01-01")
async def test_prompt_with_llm(
mocker: MockerFixture, snapshot: SnapshotAssertion
) -> None:
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
llm = FakeListLLM(responses=["foo", "bar"])
chain: Runnable = prompt | llm
assert isinstance(chain, RunnableSequence)
assert chain.first == prompt
assert chain.middle == []
assert chain.last == llm
assert dumps(chain, pretty=True) == snapshot
prompt_spy = mocker.spy(prompt.__class__, "ainvoke")
llm_spy = mocker.spy(llm.__class__, "ainvoke")
tracer = FakeTracer()
assert (
await chain.ainvoke(
{"question": "What is your name?"}, dict(callbacks=[tracer])
)
== "foo"
)
assert prompt_spy.call_args.args[1] == {"question": "What is your name?"}
assert llm_spy.call_args.args[1] == ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
)
assert tracer.runs == snapshot
mocker.stop(prompt_spy)
mocker.stop(llm_spy)
prompt_spy = mocker.spy(prompt.__class__, "abatch")
llm_spy = mocker.spy(llm.__class__, "abatch")
tracer = FakeTracer()
assert await chain.abatch(
[
{"question": "What is your name?"},
{"question": "What is your favorite color?"},
],
dict(callbacks=[tracer]),
) == ["bar", "foo"]
assert prompt_spy.call_args.args[1] == [
{"question": "What is your name?"},
{"question": "What is your favorite color?"},
]
assert llm_spy.call_args.args[1] == [
ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
),
ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your favorite color?"),
]
),
]
assert tracer.runs == snapshot
mocker.stop(prompt_spy)
mocker.stop(llm_spy)
prompt_spy = mocker.spy(prompt.__class__, "ainvoke")
llm_spy = mocker.spy(llm.__class__, "astream")
tracer = FakeTracer()
assert [
token
async for token in chain.astream(
{"question": "What is your name?"}, dict(callbacks=[tracer])
)
] == ["bar"]
assert prompt_spy.call_args.args[1] == {"question": "What is your name?"}
assert llm_spy.call_args.args[1] == ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
)
prompt_spy.reset_mock()
llm_spy.reset_mock()
stream_log = [
part async for part in chain.astream_log({"question": "What is your name?"})
]
for part in stream_log:
for op in part.ops:
if (
isinstance(op["value"], dict)
and "id" in op["value"]
and not isinstance(op["value"]["id"], list)
):
del op["value"]["id"]
assert stream_log == [
RunLogPatch(
{
"op": "replace",
"path": "",
"value": {
"logs": {},
"final_output": None, |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | "streamed_output": [],
},
}
),
RunLogPatch(
{
"op": "add",
"path": "/logs/ChatPromptTemplate",
"value": {
"end_time": None,
"final_output": None,
"metadata": {},
"name": "ChatPromptTemplate",
"start_time": "2023-01-01T00:00:00.000",
"streamed_output_str": [],
"tags": ["seq:step:1"],
"type": "prompt",
},
}
),
RunLogPatch(
{
"op": "add",
"path": "/logs/ChatPromptTemplate/final_output",
"value": ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
),
},
{
"op": "add",
"path": "/logs/ChatPromptTemplate/end_time",
"value": "2023-01-01T00:00:00.000",
},
),
RunLogPatch(
{
"op": "add",
"path": "/logs/FakeListLLM",
"value": {
"end_time": None,
"final_output": None,
"metadata": {},
"name": "FakeListLLM",
"start_time": "2023-01-01T00:00:00.000",
"streamed_output_str": [],
"tags": ["seq:step:2"],
"type": "llm",
},
}
),
RunLogPatch(
{
"op": "add",
"path": "/logs/FakeListLLM/final_output",
"value": {
"generations": [
[{"generation_info": None, "text": "foo", "type": "Generation"}] |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,407 | RunnableLambda: returned runnable called synchronously when using ainvoke | ### System Info
langchain on master branch
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
from langchain.schema.runnable import RunnableLambda
import asyncio
def idchain_sync(__input):
print(f'sync chain call: {__input}')
return __input
async def idchain_async(__input):
print(f'async chain call: {__input}')
return __input
idchain = RunnableLambda(func=idchain_sync,afunc=idchain_async)
def func(__input):
return idchain
asyncio.run(RunnableLambda(func).ainvoke('toto'))
#printss 'sync chain call: toto' instead of 'async chain call: toto'
```
### Expected behavior
LCEL's route can cause chains to be silently run synchronously, while the user uses ainvoke...
When calling a RunnableLambda A returning a chain B with ainvoke, we would expect the new chain B to be called with ainvoke;
However, if the function provided to RunnableLambda A is not async, then the chain B will be called with invoke, silently causing all the rest of the chain to be called synchronously.
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | ],
"llm_output": None,
"run": None,
},
},
{
"op": "add",
"path": "/logs/FakeListLLM/end_time",
"value": "2023-01-01T00:00:00.000",
},
),
RunLogPatch({"op": "add", "path": "/streamed_output/-", "value": "foo"}),
RunLogPatch(
{"op": "replace", "path": "/final_output", "value": {"output": "foo"}}
),
]
@freeze_time("2023-01-01")
async def test_stream_log_retriever() -> None:
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{documents}"
+ "{question}"
)
llm = FakeListLLM(responses=["foo", "bar"])
chain: Runnable = (
{"documents": FakeRetriever(), "question": itemgetter("question")}
| prompt
| {"one": llm, "two": llm}
)
stream_log = [
part async for part in chain.astream_log({"question": "What is your name?"})
]
for part in stream_log:
for op in part.ops:
if (
isinstance(op["value"], dict)
and "id" in op["value"]
and not isinstance(op["value"]["id"], list)
):
del op["value"]["id"]
assert sorted(cast(RunLog, add(stream_log)).state["logs"]) == [
"ChatPromptTemplate",
"FakeListLLM",
"FakeListLLM:2",
"Retriever",
"RunnableLambda",
"RunnableParallel",
"RunnableParallel:2",
]
@freeze_time("2023-01-01")
async def test_prompt_with_llm_and_async_lambda(
mocker: MockerFixture, snapshot: SnapshotAssertion
) -> None:
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
llm = FakeListLLM(responses=["foo", "bar"])
async def passthrough(input: Any) -> Any:
return input
chain = prompt | llm | passthrough
assert isinstance(chain, RunnableSequence)
assert chain.first == prompt
assert chain.middle == [llm]
assert chain.last == RunnableLambda(func=passthrough)
assert dumps(chain, pretty=True) == snapshot
prompt_spy = mocker.spy(prompt.__class__, "ainvoke")
llm_spy = mocker.spy(llm.__class__, "ainvoke")
tracer = FakeTracer()
assert (
await chain.ainvoke(
{"question": "What is your name?"}, dict(callbacks=[tracer])
)
== "foo"
)
assert prompt_spy.call_args.args[1] == {"question": "What is your name?"}
assert llm_spy.call_args.args[1] == ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
)
assert tracer.runs == snapshot
mocker.stop(prompt_spy)
mocker.stop(llm_spy)
@freeze_time("2023-01-01")
def test_prompt_with_chat_model_and_parser(
mocker: MockerFixture, snapshot: SnapshotAssertion
) -> None:
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
chat = FakeListChatModel(responses=["foo, bar"])
parser = CommaSeparatedListOutputParser()
chain = prompt | chat | parser
assert isinstance(chain, RunnableSequence)
assert chain.first == prompt
assert chain.middle == [chat]
assert chain.last == parser
assert dumps(chain, pretty=True) == snapshot
prompt_spy = mocker.spy(prompt.__class__, "invoke")
chat_spy = mocker.spy(chat.__class__, "invoke")
parser_spy = mocker.spy(parser.__class__, "invoke")
tracer = FakeTracer()
assert chain.invoke(
{"question": "What is your name?"}, dict(callbacks=[tracer])
) == ["foo", "bar"]
assert prompt_spy.call_args.args[1] == {"question": "What is your name?"}
assert chat_spy.call_args.args[1] == ChatPromptValue(
messages=[
SystemMessage(content="You are a nice assistant."),
HumanMessage(content="What is your name?"),
]
)
assert parser_spy.call_args.args[1] == AIMessage(content="foo, bar")
assert tracer.runs == snapshot
@freeze_time("2023-01-01")
def test_combining_sequences(
mocker: MockerFixture, snapshot: SnapshotAssertion
) -> None:
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
chat = FakeListChatModel(responses=["foo, bar"])
parser = CommaSeparatedListOutputParser()
chain = prompt | chat | parser
assert isinstance(chain, RunnableSequence)
assert chain.first == prompt
assert chain.middle == [chat]
assert chain.last == parser
if sys.version_info >= (3, 9):
assert dumps(chain, pretty=True) == snapshot
prompt2 = (
SystemMessagePromptTemplate.from_template("You are a nicer assistant.")
+ "{question}"
)
chat2 = FakeListChatModel(responses=["baz, qux"])
parser2 = CommaSeparatedListOutputParser()
input_formatter: RunnableLambda[List[str], Dict[str, Any]] = RunnableLambda(
lambda x: {"question": x[0] + x[1]}
)
chain2 = cast(RunnableSequence, input_formatter | prompt2 | chat2 | parser2)
assert isinstance(chain, RunnableSequence)
assert chain2.first == input_formatter
assert chain2.middle == [prompt2, chat2]
assert chain2.last == parser2
if sys.version_info >= (3, 9):
assert dumps(chain2, pretty=True) == snapshot
combined_chain = cast(RunnableSequence, chain | chain2)
assert combined_chain.first == prompt
assert combined_chain.middle == [
chat,
parser,
input_formatter,
prompt2,
chat2,
]
assert combined_chain.last == parser2
if sys.version_info >= (3, 9):
assert dumps(combined_chain, pretty=True) == snapshot
tracer = FakeTracer()
assert combined_chain.invoke(
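For context on what these tests exercise: the expected behavior stated above is that a `RunnableLambda` whose sync function returns another `Runnable` should keep the async path intact under `ainvoke` rather than falling back to `invoke`. Below is a minimal sketch of such an async-preserving dispatch; the helper name `ainvoke_lambda` and its signature are illustrative assumptions for this sketch, not the actual code in `langchain_core.runnables.base`.

```python
from typing import Any, Awaitable, Callable, Optional

from langchain.schema.runnable import Runnable


async def ainvoke_lambda(
    func: Optional[Callable[[Any], Any]],
    afunc: Optional[Callable[[Any], Awaitable[Any]]],
    input: Any,
) -> Any:
    """Sketch (assumed helper): run a lambda on the async path end to end."""
    # Prefer the async implementation whenever one was provided.
    if afunc is not None:
        output = await afunc(input)
    else:
        # Fall back to the sync function only to *compute* the output...
        output = func(input)  # type: ignore[misc]
    # ...and if that output is itself a Runnable, delegate to its async
    # entrypoint instead of silently invoking it synchronously.
    if isinstance(output, Runnable):
        return await output.ainvoke(input)
    return output
```

Under a dispatch like this, a sync `func` that returns a runnable carrying both sync and async implementations would take the runnable's async path when driven through `ainvoke`, which is the behavior the issue describes as expected.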