Dataset schema (column name: dtype, observed range/values):

status: stringclasses (1 value)
repo_name: stringclasses (31 values)
repo_url: stringclasses (31 values)
issue_id: int64 (1 to 104k)
title: stringlengths (4 to 233)
body: stringlengths (0 to 186k)
issue_url: stringlengths (38 to 56)
pull_url: stringlengths (37 to 54)
before_fix_sha: stringlengths (40 to 40)
after_fix_sha: stringlengths (40 to 40)
report_datetime: unknown
language: stringclasses (5 values)
commit_datetime: unknown
updated_file: stringlengths (7 to 188)
chunk_content: stringlengths (1 to 1.03M)
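The schema above maps directly onto the Hugging Face `datasets` API. Below is a minimal sketch of loading and filtering rows with this layout; the dataset id `org/bug-fix-chunks` is a placeholder, since the source does not name the dataset.

```python
# Hedged sketch; "org/bug-fix-chunks" is a placeholder dataset id.
from datasets import load_dataset

ds = load_dataset("org/bug-fix-chunks", split="train")

# Keep only Python rows, then inspect one record's key fields.
python_rows = ds.filter(lambda row: row["language"] == "python")
print(python_rows[0]["issue_url"], python_rows[0]["updated_file"])
```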
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
11814
Create retriever for Outline to ask questions on knowledge base
### Feature request

A retriever for documents from [Outline](https://github.com/outline/outline). The API has a search endpoint that makes this possible: https://www.getoutline.com/developers#tag/Documents/paths/~1documents.search/post

The implementation will be similar to the Wikipedia retriever: https://python.langchain.com/docs/integrations/retrievers/wikipedia

### Motivation

Outline is an open-source project that lets you create a knowledge base, like a wiki. Creating a retriever for Outline will let your team interact with your knowledge base using an LLM.

### Your contribution

A PR will be coming soon.
https://github.com/langchain-ai/langchain/issues/11814
https://github.com/langchain-ai/langchain/pull/13889
f2af82058f4904b20ae95c6d17d2b65666bf882a
935f78c9449c40473541666a8b0a0dc61873b0eb
"2023-10-15T01:58:24Z"
python
"2023-11-27T02:56:12Z"
libs/langchain/tests/unit_tests/retrievers/test_imports.py
from langchain.retrievers import __all__

EXPECTED_ALL = [
    "AmazonKendraRetriever",
    "ArceeRetriever",
    "ArxivRetriever",
    "AzureCognitiveSearchRetriever",
    "ChatGPTPluginRetriever",
    "ContextualCompressionRetriever",
    "ChaindeskRetriever",
    "CohereRagRetriever",
    "ElasticSearchBM25Retriever",
    "EmbedchainRetriever",
    "GoogleDocumentAIWarehouseRetriever",
    "GoogleCloudEnterpriseSearchRetriever",
    "GoogleVertexAIMultiTurnSearchRetriever",
    "GoogleVertexAISearchRetriever",
    "KayAiRetriever",
    "KNNRetriever",
"LlamaIndexGraphRetriever", "LlamaIndexRetriever", "MergerRetriever", "MetalRetriever", "MilvusRetriever", "MultiQueryRetriever", "PineconeHybridSearchRetriever", "PubMedRetriever", "RemoteLangChainRetriever", "SVMRetriever", "SelfQueryRetriever", "TavilySearchAPIRetriever", "TFIDFRetriever", "BM25Retriever", "TimeWeightedVectorStoreRetriever", "VespaRetriever", "WeaviateHybridSearchRetriever", "WikipediaRetriever", "ZepRetriever", "ZillizRetriever", "DocArrayRetriever", "RePhraseQueryRetriever", "WebResearchRetriever", "EnsembleRetriever", "ParentDocumentRetriever", "MultiVectorRetriever", ] def test_all_imports() -> None: assert set(__all__) == set(EXPECTED_ALL)
libs/langchain/tests/unit_tests/utilities/test_imports.py
from langchain.utilities import __all__

EXPECTED_ALL = [
    "AlphaVantageAPIWrapper",
    "ApifyWrapper",
    "ArceeWrapper",
    "ArxivAPIWrapper",
    "BibtexparserWrapper",
    "BingSearchAPIWrapper",
    "BraveSearchWrapper",
    "DuckDuckGoSearchAPIWrapper",
    "GoldenQueryAPIWrapper",
    "GooglePlacesAPIWrapper",
    "GoogleScholarAPIWrapper",
"GoogleSearchAPIWrapper", "GoogleSerperAPIWrapper", "GraphQLAPIWrapper", "JiraAPIWrapper", "LambdaWrapper", "MaxComputeAPIWrapper", "MetaphorSearchAPIWrapper", "OpenWeatherMapAPIWrapper", "Portkey", "PowerBIDataset", "PubMedAPIWrapper", "PythonREPL", "Requests", "RequestsWrapper", "SQLDatabase", "SceneXplainAPIWrapper", "SearchApiAPIWrapper", "SearxSearchWrapper", "SerpAPIWrapper", "SparkSQL", "TensorflowDatasets", "TextRequestsWrapper", "TwilioAPIWrapper", "WikipediaAPIWrapper", "WolframAlphaAPIWrapper", "ZapierNLAWrapper", ] def test_all_imports() -> None: assert set(__all__) == set(EXPECTED_ALL)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13803
CypherQueryCorrector cannot validate a correct cypher, some query types are not handled
CypherQueryCorrector does not handle some query types. For example, a query with a comma between clauses in the MATCH:

```
MATCH (a:APPLE {apple_id: 123})-[:IN]->(b:BUCKET), (ba:BANANA {name: banana1})
```

Corresponding code section:

- It extracts a relation between BUCKET and BANANA; however, there is none.
- The ELSE case should be divided into cases (INCOMING relation, OUTGOING relation, BIDIRECTIONAL relation, no relation, etc.).
- If there is no relation and only a comma between clauses, it should not attempt validation.

https://github.com/langchain-ai/langchain/blob/751226e067bc54a70910763c0eebb34544aaf47c/libs/langchain/langchain/chains/graph_qa/cypher_utils.py#L228
https://github.com/langchain-ai/langchain/issues/13803
https://github.com/langchain-ai/langchain/pull/13849
e42e95cc11cc99b8597dc6665a3cfdbb0a0f37e6
1ad65f7a982affc9429d10a4919a78ffd54646dc
"2023-11-24T08:39:00Z"
python
"2023-11-27T03:30:11Z"
libs/langchain/langchain/chains/graph_qa/cypher_utils.py
import re
from collections import namedtuple
from typing import Any, Dict, List, Optional, Tuple

Schema = namedtuple("Schema", ["left_node", "relation", "right_node"])


class CypherQueryCorrector:
    """
    Used to correct relationship direction in generated Cypher statements.
    This code is copied from the winner's submission to the Cypher competition:
    https://github.com/sakusaku-rich/cypher-direction-competition
    """

    property_pattern = re.compile(r"\{.+?\}")
    node_pattern = re.compile(r"\(.+?\)")
    path_pattern = re.compile(r"\(.*\).*-.*-.*\(.*\)")
    node_relation_node_pattern = re.compile(
        r"(\()+(?P<left_node>[^()]*?)\)(?P<relation>.*?)\((?P<right_node>[^()]*?)(\))+"
    )
    relation_type_pattern = re.compile(r":(?P<relation_type>.+?)?(\{.+\})?]")

    def __init__(self, schemas: List[Schema]):
        """
        Args:
            schemas: list of schemas
        """
        self.schemas = schemas

    def clean_node(self, node: str) -> str:
""" Args: node: node in string format """ node = re.sub(self.property_pattern, "", node) node = node.replace("(", "") node = node.replace(")", "") node = node.strip() return node def detect_node_variables(self, query: str) -> Dict[str, List[str]]: """ Args: query: cypher query """ nodes = re.findall(self.node_pattern, query) nodes = [self.clean_node(node) for node in nodes] res: Dict[str, Any] = {} for node in nodes: parts = node.split(":") if parts == "": continue variable = parts[0] if variable not in res: res[variable] = [] res[variable] += parts[1:] return res def extract_paths(self, query: str) -> "List[str]":
""" Args: query: cypher query """ return re.findall(self.path_pattern, query) def judge_direction(self, relation: str) -> str: """ Args: relation: relation in string format """ direction = "BIDIRECTIONAL" if relation[0] == "<": direction = "INCOMING" if relation[-1] == ">": direction = "OUTGOING" return direction def extract_node_variable(self, part: str) -> Optional[str]:
""" Args: part: node in string format """ part = part.lstrip("(").rstrip(")") idx = part.find(":") if idx != -1: part = part[:idx] return None if part == "" else part def detect_labels( self, str_node: str, node_variable_dict: Dict[str, Any] ) -> List[str]: """ Args: str_node: node in string format node_variable_dict: dictionary of node variables """ splitted_node = str_node.split(":") variable = splitted_node[0] labels = [] if variable in node_variable_dict: labels = node_variable_dict[variable] elif variable == "" and len(splitted_node) > 1: labels = splitted_node[1:] return labels def verify_schema(
        self,
        from_node_labels: List[str],
        relation_types: List[str],
        to_node_labels: List[str],
    ) -> bool:
        """
        Args:
            from_node_labels: labels of the from node
            relation_types: types of the relation
            to_node_labels: labels of the to node
        """
        valid_schemas = self.schemas
        if from_node_labels != []:
            from_node_labels = [label.strip("`") for label in from_node_labels]
            valid_schemas = [
                schema for schema in valid_schemas if schema[0] in from_node_labels
            ]
        if to_node_labels != []:
            to_node_labels = [label.strip("`") for label in to_node_labels]
            valid_schemas = [
                schema for schema in valid_schemas if schema[2] in to_node_labels
            ]
        if relation_types != []:
            relation_types = [type.strip("`") for type in relation_types]
            valid_schemas = [
                schema for schema in valid_schemas if schema[1] in relation_types
            ]
        return valid_schemas != []

    def detect_relation_types(self, str_relation: str) -> Tuple[str, List[str]]:
""" Args: str_relation: relation in string format """ relation_direction = self.judge_direction(str_relation) relation_type = self.relation_type_pattern.search(str_relation) if relation_type is None or relation_type.group("relation_type") is None: return relation_direction, [] relation_types = [ t.strip().strip("!") for t in relation_type.group("relation_type").split("|") ] return relation_direction, relation_types def correct_query(self, query: str) -> str:
""" Args: query: cypher query """ node_variable_dict = self.detect_node_variables(query) paths = self.extract_paths(query) for path in paths: original_path = path start_idx = 0 while start_idx < len(path): match_res = re.match(self.node_relation_node_pattern, path[start_idx:]) if match_res is None: break start_idx += match_res.start() match_dict = match_res.groupdict() left_node_labels = self.detect_labels( match_dict["left_node"], node_variable_dict ) right_node_labels = self.detect_labels( match_dict["right_node"], node_variable_dict ) end_idx = ( start_idx + 4 + len(match_dict["left_node"]) + len(match_dict["relation"]) + len(match_dict["right_node"])
                )
                original_partial_path = original_path[start_idx : end_idx + 1]
                relation_direction, relation_types = self.detect_relation_types(
                    match_dict["relation"]
                )
                if relation_types != [] and "".join(relation_types).find("*") != -1:
                    start_idx += (
                        len(match_dict["left_node"]) + len(match_dict["relation"]) + 2
                    )
                    continue

                if relation_direction == "OUTGOING":
                    is_legal = self.verify_schema(
                        left_node_labels, relation_types, right_node_labels
                    )
                    if not is_legal:
                        is_legal = self.verify_schema(
                            right_node_labels, relation_types, left_node_labels
                        )
                        if is_legal:
                            corrected_relation = "<" + match_dict["relation"][:-1]
                            corrected_partial_path = original_partial_path.replace(
                                match_dict["relation"], corrected_relation
                            )
                            query = query.replace(
                                original_partial_path, corrected_partial_path
                            )
                        else:
                            return ""
                elif relation_direction == "INCOMING":
                    is_legal = self.verify_schema(
                        right_node_labels, relation_types, left_node_labels
                    )
                    if not is_legal:
                        is_legal = self.verify_schema(
                            left_node_labels, relation_types, right_node_labels
                        )
                        if is_legal:
                            corrected_relation = match_dict["relation"][1:] + ">"
                            corrected_partial_path = original_partial_path.replace(
                                match_dict["relation"], corrected_relation
                            )
                            query = query.replace(
                                original_partial_path, corrected_partial_path
                            )
                        else:
                            return ""
                else:
                    is_legal = self.verify_schema(
                        left_node_labels, relation_types, right_node_labels
                    )
                    is_legal |= self.verify_schema(
                        right_node_labels, relation_types, left_node_labels
                    )
                    if not is_legal:
                        return ""
                start_idx += (
                    len(match_dict["left_node"]) + len(match_dict["relation"]) + 2
                )
        return query

    def __call__(self, query: str) -> str:
"""Correct the query to make it valid. If Args: query: cypher query """ return self.correct_query(query)
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13667
Can not load chain with ChatPromptTemplate
### System Info

LangChain version: `0.0.339`
Python version: `3.10`

### Who can help?

@hwchase17

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async

### Reproduction

When serializing the `ChatPromptTemplate`, it is saved in a JSON/YAML format like this:

```
{'input_variables': ['question', 'context'], 'output_parser': None, 'partial_variables': {}, 'messages': [{'prompt': {'input_variables': ['context', 'question'], 'output_parser': None, 'partial_variables': {}, 'template': "...", 'template_format': 'f-string', 'validate_template': True, '_type': 'prompt'}, 'additional_kwargs': {}}], '_type': 'chat'}
```

Note that the `_type` is "chat". However, LangChain's `load_prompt_from_config` [does not recognize "chat" as a supported prompt type](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/prompts/loading.py#L19).

Here is a minimal example to reproduce the issue:

```python
from langchain.prompts import ChatPromptTemplate
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains.loading import load_chain

TEMPLATE = """Answer the question based on the context:
{context}

Question: {question}

Answer: """

chat_prompt = ChatPromptTemplate.from_template(TEMPLATE)
llm = OpenAI()

def get_retriever(persist_dir=None):
    vectorstore = FAISS.from_texts(
        ["harrison worked at kensho"], embedding=OpenAIEmbeddings()
    )
    return vectorstore.as_retriever()

chain_with_chat_prompt = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=get_retriever(),
    chain_type_kwargs={"prompt": chat_prompt},
)

chain_with_prompt_saved_path = "./chain_with_prompt.yaml"
chain_with_chat_prompt.save(chain_with_prompt_saved_path)

loaded_chain = load_chain(chain_with_prompt_saved_path, retriever=get_retriever())
```

The above script fails with the error: `ValueError: Loading chat prompt not supported`

### Expected behavior

Loading a chain that contains a `ChatPromptTemplate` should work.
https://github.com/langchain-ai/langchain/issues/13667
https://github.com/langchain-ai/langchain/pull/13818
8a3e0c9afa59487b191e411a4acae55e39db7fa9
b3e08f9239d5587fa61adfa9c58dafdf6b740a38
"2023-11-21T17:40:19Z"
python
"2023-11-27T16:39:50Z"
libs/core/langchain_core/prompts/loading.py
"""Load prompts.""" import json import logging from pathlib import Path from typing import Callable, Dict, Union import yaml from langchain_core.output_parsers.string import StrOutputParser from langchain_core.prompts.base import BasePromptTemplate from langchain_core.prompts.few_shot import FewShotPromptTemplate from langchain_core.prompts.prompt import PromptTemplate from langchain_core.utils import try_load_from_hub URL_BASE = "https://raw.githubusercontent.com/hwchase17/langchain-hub/master/prompts/" logger = logging.getLogger(__name__) def load_prompt_from_config(config: dict) -> BasePromptTemplate:
"""Load prompt from Config Dict.""" if "_type" not in config: logger.warning("No `_type` key found, defaulting to `prompt`.") config_type = config.pop("_type", "prompt") if config_type not in type_to_loader_dict: raise ValueError(f"Loading {config_type} prompt not supported") prompt_loader = type_to_loader_dict[config_type] return prompt_loader(config) def _load_template(var_name: str, config: dict) -> dict: """Load template from the path if applicable.""" if f"{var_name}_path" in config: if var_name in config: raise ValueError( f"Both `{var_name}_path` and `{var_name}` cannot be provided." ) template_path = Path(config.pop(f"{var_name}_path")) if template_path.suffix == ".txt": with open(template_path) as f: template = f.read() else: raise ValueError config[var_name] = template return config def _load_examples(config: dict) -> dict:
"""Load examples if necessary.""" if isinstance(config["examples"], list): pass elif isinstance(config["examples"], str): with open(config["examples"]) as f: if config["examples"].endswith(".json"): examples = json.load(f) elif config["examples"].endswith((".yaml", ".yml")): examples = yaml.safe_load(f) else: raise ValueError( "Invalid file format. Only json or yaml formats are supported." ) config["examples"] = examples else: raise ValueError("Invalid examples format. Only list or string are supported.") return config def _load_output_parser(config: dict) -> dict:
"""Load output parser.""" if "output_parser" in config and config["output_parser"]: _config = config.pop("output_parser") output_parser_type = _config.pop("_type") if output_parser_type == "default": output_parser = StrOutputParser(**_config) else: raise ValueError(f"Unsupported output parser {output_parser_type}") config["output_parser"] = output_parser return config def _load_few_shot_prompt(config: dict) -> FewShotPromptTemplate: """Load the "few shot" prompt from the config.""" config = _load_template("suffix", config) config = _load_template("prefix", config) if "example_prompt_path" in config: if "example_prompt" in config: raise ValueError( "Only one of example_prompt and example_prompt_path should " "be specified." ) config["example_prompt"] = load_prompt(config.pop("example_prompt_path")) else: config["example_prompt"] = load_prompt_from_config(config["example_prompt"]) config = _load_examples(config) config = _load_output_parser(config) return FewShotPromptTemplate(**config) def _load_prompt(config: dict) -> PromptTemplate:
"""Load the prompt template from config.""" config = _load_template("template", config) config = _load_output_parser(config) template_format = config.get("template_format", "f-string") if template_format == "jinja2": raise ValueError( f"Loading templates with '{template_format}' format is no longer supported " f"since it can lead to arbitrary code execution. Please migrate to using " f"the 'f-string' template format, which does not suffer from this issue." ) return PromptTemplate(**config) def load_prompt(path: Union[str, Path]) -> BasePromptTemplate:
"""Unified method for loading a prompt from LangChainHub or local fs.""" if hub_result := try_load_from_hub( path, _load_prompt_from_file, "prompts", {"py", "json", "yaml"} ): return hub_result else: return _load_prompt_from_file(path) def _load_prompt_from_file(file: Union[str, Path]) -> BasePromptTemplate: """Load prompt from file.""" if isinstance(file, str): file_path = Path(file) else: file_path = file if file_path.suffix == ".json": with open(file_path) as f: config = json.load(f) elif file_path.suffix == ".yaml": with open(file_path, "r") as f: config = yaml.safe_load(f) else: raise ValueError(f"Got unsupported file type {file_path.suffix}") return load_prompt_from_config(config) type_to_loader_dict: Dict[str, Callable[[dict], BasePromptTemplate]] = { "prompt": _load_prompt, "few_shot": _load_few_shot_prompt, }
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info

langchain on master branch

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async

### Reproduction

```python
import asyncio

from langchain.schema.runnable import RunnableLambda

def idchain_sync(__input):
    print(f'sync chain call: {__input}')
    return __input

async def idchain_async(__input):
    print(f'async chain call: {__input}')
    return __input

idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async)

def func(__input):
    return idchain

asyncio.run(RunnableLambda(func).ainvoke('toto'))
# prints 'sync chain call: toto' instead of 'async chain call: toto'
```

### Expected behavior

LCEL routing can silently cause chains to run synchronously even though the user called `ainvoke`. When a RunnableLambda A returns a chain B and is called with `ainvoke`, we would expect the returned chain B to be called with `ainvoke` as well. However, if the function provided to RunnableLambda A is not async, chain B is called with `invoke`, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
from __future__ import annotations

import asyncio
import inspect
import threading
from abc import ABC, abstractmethod
from concurrent.futures import FIRST_COMPLETED, wait
from functools import partial
from itertools import tee
from operator import itemgetter
from typing import (
    TYPE_CHECKING,
    Any,
    AsyncIterator,
    Awaitable,
    Callable,
    Dict,
    Generic,
    Iterator,
    List,
    Mapping,
    Optional,
    Sequence,
    Tuple,
    Type,
    TypeVar,
    Union,
    cast,
    overload,
)

from typing_extensions import Literal, get_args

from langchain_core.load.dump import dumpd
from langchain_core.load.serializable import Serializable
from langchain_core.pydantic_v1 import BaseModel, Field, create_model
from langchain_core.runnables.config import (
    RunnableConfig,
    acall_func_with_variable_args,
    call_func_with_variable_args,
    ensure_config,
    get_async_callback_manager_for_config,
    get_callback_manager_for_config,
    get_config_list,
    get_executor_for_config,
    merge_configs,
    patch_config,
)
from langchain_core.runnables.utils import (
    AddableDict,
    AnyConfigurableField,
    ConfigurableField,
    ConfigurableFieldSpec,
    Input,
    Output,
    accepts_config,
    accepts_run_manager,
    gather_with_concurrency,
    get_function_first_arg_dict_keys,
    get_lambda_source,
    get_unique_config_specs,
    indent_lines_after_first,
)
from langchain_core.utils.aiter import atee, py_anext
from langchain_core.utils.iter import safetee

if TYPE_CHECKING:
    from langchain_core.callbacks.manager import (
        AsyncCallbackManagerForChainRun,
        CallbackManagerForChainRun,
    )
    from langchain_core.runnables.fallbacks import (
        RunnableWithFallbacks as RunnableWithFallbacksT,
    )
    from langchain_core.tracers.log_stream import RunLog, RunLogPatch
    from langchain_core.tracers.root_listeners import Listener


Other = TypeVar("Other")
class Runnable(Generic[Input, Output], ABC):
    """A unit of work that can be invoked, batched, streamed, transformed and composed.

    Key Methods
    ===========

    * invoke/ainvoke: Transforms a single input into an output.
    * batch/abatch: Efficiently transforms multiple inputs into outputs.
    * stream/astream: Streams output from a single input as it's produced.
    * astream_log: Streams output and selected intermediate results from an input.

    Built-in optimizations:

    * Batch: By default, batch runs invoke() in parallel using a thread pool executor.
      Override to optimize batching.

    * Async: Methods with "a" suffix are asynchronous. By default, they execute
      the sync counterpart using asyncio's thread pool. Override for native async.

    All methods accept an optional config argument, which can be used to configure
    execution, add tags and metadata for tracing and debugging etc.

    Runnables expose schematic information about their input, output and config via
    the input_schema property, the output_schema property and config_schema method.

    LCEL and Composition
    ====================

    The LangChain Expression Language (LCEL) is a declarative way to compose Runnables
    into chains. Any chain constructed this way will automatically have sync, async,
    batch, and streaming support.

    The main composition primitives are RunnableSequence and RunnableParallel.

    RunnableSequence invokes a series of runnables sequentially, with one runnable's
    output serving as the next's input. Construct using the `|` operator or by
    passing a list of runnables to RunnableSequence.

    RunnableParallel invokes runnables concurrently, providing the same input
    to each. Construct it using a dict literal within a sequence or by passing a
    dict to RunnableParallel.
    For example,

    .. code-block:: python

        from langchain_core.runnables import RunnableLambda

        # A RunnableSequence constructed using the `|` operator
        sequence = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)
        sequence.invoke(1)  # 4
        sequence.batch([1, 2, 3])  # [4, 6, 8]

        # A sequence that contains a RunnableParallel constructed using a dict literal
        sequence = RunnableLambda(lambda x: x + 1) | {
            'mul_2': RunnableLambda(lambda x: x * 2),
            'mul_5': RunnableLambda(lambda x: x * 5)
        }
        sequence.invoke(1)  # {'mul_2': 4, 'mul_5': 10}

    Standard Methods
    ================

    All Runnables expose additional methods that can be used to modify their behavior
    (e.g., add a retry policy, add lifecycle listeners, make them configurable, etc.).

    These methods will work on any Runnable, including Runnable chains constructed
    by composing other Runnables. See the individual methods for details.

    For example,

    .. code-block:: python

        from langchain_core.runnables import RunnableLambda

        import random

        def add_one(x: int) -> int:
            return x + 1

        def buggy_double(y: int) -> int:
            '''Buggy code that will fail 70% of the time'''
            if random.random() > 0.3:
                print('This code failed, and will probably be retried!')
                raise ValueError('Triggered buggy code')
            return y * 2

        sequence = (
            RunnableLambda(add_one) |
            RunnableLambda(buggy_double).with_retry(  # Retry on failure
                stop_after_attempt=10,
                wait_exponential_jitter=False
            )
        )

        print(sequence.input_schema.schema())   # Show inferred input schema
        print(sequence.output_schema.schema())  # Show inferred output schema
        print(sequence.invoke(2))  # invoke the sequence (note the retry above!!)

    Debugging and tracing
    =====================

    As the chains get longer, it can be useful to be able to see intermediate results
    to debug and trace the chain.

    You can set the global debug flag to True to enable debug output for all chains:

        .. code-block:: python

            from langchain_core.globals import set_debug
            set_debug(True)

    Alternatively, you can pass existing or custom callbacks to any given chain:

        .. code-block:: python

            from langchain_core.tracers import ConsoleCallbackHandler

            chain.invoke(
                ...,
                config={'callbacks': [ConsoleCallbackHandler()]}
            )

    For a UI (and much more) checkout LangSmith: https://docs.smith.langchain.com/
    """

    @property
    def InputType(self) -> Type[Input]:
"""The type of input this runnable accepts specified as a type annotation.""" for cls in self.__class__.__orig_bases__: type_args = get_args(cls) if type_args and len(type_args) == 2: return type_args[0] raise TypeError( f"Runnable {self.__class__.__name__} doesn't have an inferable InputType. " "Override the InputType property to specify the input type." ) @property def OutputType(self) -> Type[Output]: """The type of output this runnable produces specified as a type annotation.""" for cls in self.__class__.__orig_bases__: type_args = get_args(cls) if type_args and len(type_args) == 2: return type_args[1] raise TypeError( f"Runnable {self.__class__.__name__} doesn't have an inferable OutputType. " "Override the OutputType property to specify the output type." ) @property def input_schema(self) -> Type[BaseModel]:
"""The type of input this runnable accepts specified as a pydantic model.""" return self.get_input_schema() def get_input_schema( self, config: Optional[RunnableConfig] = None ) -> Type[BaseModel]: """Get a pydantic model that can be used to validate input to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic input schema that depends on which configuration the runnable is invoked with. This method allows to get an input schema for a specific configuration. Args: config: A config to use when generating the schema. Returns: A pydantic model that can be used to validate input. """ root_type = self.InputType if inspect.isclass(root_type) and issubclass(root_type, BaseModel): return root_type return create_model( self.__class__.__name__ + "Input", __root__=(root_type, None) ) @property def output_schema(self) -> Type[BaseModel]:
"""The type of output this runnable produces specified as a pydantic model.""" return self.get_output_schema() def get_output_schema( self, config: Optional[RunnableConfig] = None ) -> Type[BaseModel]: """Get a pydantic model that can be used to validate output to the runnable. Runnables that leverage the configurable_fields and configurable_alternatives methods will have a dynamic output schema that depends on which configuration the runnable is invoked with. This method allows to get an output schema for a specific configuration. Args: config: A config to use when generating the schema. Returns: A pydantic model that can be used to validate output. """ root_type = self.OutputType if inspect.isclass(root_type) and issubclass(root_type, BaseModel): return root_type return create_model( self.__class__.__name__ + "Output", __root__=(root_type, None) ) @property def config_specs(self) -> List[ConfigurableFieldSpec]:
"""List configurable fields for this runnable.""" return [] def config_schema( self, *, include: Optional[Sequence[str]] = None ) -> Type[BaseModel]: """The type of config this runnable accepts specified as a pydantic model. To mark a field as configurable, see the `configurable_fields` and `configurable_alternatives` methods. Args: include: A list of fields to include in the config schema. Returns: A pydantic model that can be used to validate config. """ class _Config:
            arbitrary_types_allowed = True

        include = include or []
        config_specs = self.config_specs
        configurable = (
            create_model(
                "Configurable",
                **{
                    spec.id: (
                        spec.annotation,
                        Field(
                            spec.default, title=spec.name, description=spec.description
                        ),
                    )
                    for spec in config_specs
                },
            )
            if config_specs
            else None
        )

        return create_model(
            self.__class__.__name__ + "Config",
            __config__=_Config,
            **({"configurable": (configurable, None)} if configurable else {}),
            **{
                field_name: (field_type, None)
                for field_name, field_type in RunnableConfig.__annotations__.items()
                if field_name in [i for i in include if i != "configurable"]
            },
        )

    def __or__(
        self,
        other: Union[
            Runnable[Any, Other],
            Callable[[Any], Other],
            Callable[[Iterator[Any]], Iterator[Other]],
            Mapping[str, Union[Runnable[Any, Other], Callable[[Any], Other], Any]],
        ],
    ) -> RunnableSerializable[Input, Other]:
        """Compose this runnable with another object to create a RunnableSequence."""
        return RunnableSequence(first=self, last=coerce_to_runnable(other))

    def __ror__(
        self,
        other: Union[
            Runnable[Other, Any],
            Callable[[Other], Any],
            Callable[[Iterator[Other]], Iterator[Any]],
            Mapping[str, Union[Runnable[Other, Any], Callable[[Other], Any], Any]],
        ],
    ) -> RunnableSerializable[Other, Output]:
        """Compose this runnable with another object to create a RunnableSequence."""
        return RunnableSequence(first=coerce_to_runnable(other), last=self)

    """ --- Public API --- """

    @abstractmethod
    def invoke(self, input: Input, config: Optional[RunnableConfig] = None) -> Output:
        """Transform a single input into an output. Override to implement.

        Args:
            input: The input to the runnable.
            config: A config to use when invoking the runnable.
               The config supports standard keys like 'tags', 'metadata' for tracing
               purposes, 'max_concurrency' for controlling how much work to do
               in parallel, and other keys. Please refer to the RunnableConfig
               for more details.

        Returns:
            The output of the runnable.
        """

    async def ainvoke(
        self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any
    ) -> Output:
        """Default implementation of ainvoke, calls invoke from a thread.

        The default implementation allows usage of async code even if
        the runnable did not implement a native async version of invoke.

        Subclasses should override this method if they can run asynchronously.
        """
        return await asyncio.get_running_loop().run_in_executor(
            None, partial(self.invoke, **kwargs), input, config
        )

    def batch(
        self,
        inputs: List[Input],
        config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None,
        *,
        return_exceptions: bool = False,
        **kwargs: Optional[Any],
    ) -> List[Output]:
        """Default implementation runs invoke in parallel using a thread pool executor.

        The default implementation of batch works well for IO bound runnables.

        Subclasses should override this method if they can batch more efficiently;
        e.g., if the underlying runnable uses an API which supports a batch mode.
        """
        if not inputs:
            return []

        configs = get_config_list(config, len(inputs))

        def invoke(input: Input, config: RunnableConfig) -> Union[Output, Exception]:
            if return_exceptions:
                try:
                    return self.invoke(input, config, **kwargs)
                except Exception as e:
                    return e
            else:
                return self.invoke(input, config, **kwargs)

        # If there's only one input, don't bother with the executor
        if len(inputs) == 1:
            return cast(List[Output], [invoke(inputs[0], configs[0])])

        with get_executor_for_config(configs[0]) as executor:
            return cast(List[Output], list(executor.map(invoke, inputs, configs)))

    async def abatch(
        self,
        inputs: List[Input],
        config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None,
        *,
        return_exceptions: bool = False,
        **kwargs: Optional[Any],
    ) -> List[Output]:
        """Default implementation runs ainvoke in parallel using asyncio.gather.

        The default implementation of batch works well for IO bound runnables.

        Subclasses should override this method if they can batch more efficiently;
        e.g., if the underlying runnable uses an API which supports a batch mode.
        """
        if not inputs:
            return []

        configs = get_config_list(config, len(inputs))

        async def ainvoke(
            input: Input, config: RunnableConfig
        ) -> Union[Output, Exception]:
            if return_exceptions:
                try:
                    return await self.ainvoke(input, config, **kwargs)
                except Exception as e:
                    return e
            else:
                return await self.ainvoke(input, config, **kwargs)

        coros = map(ainvoke, inputs, configs)
        return await gather_with_concurrency(configs[0].get("max_concurrency"), *coros)

    def stream(
        self,
        input: Input,
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> Iterator[Output]:
        """
        Default implementation of stream, which calls invoke.
        Subclasses should override this method if they support streaming output.
        """
        yield self.invoke(input, config, **kwargs)

    async def astream(
        self,
        input: Input,
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> AsyncIterator[Output]:
        """
        Default implementation of astream, which calls ainvoke.
        Subclasses should override this method if they support streaming output.
        """
        yield await self.ainvoke(input, config, **kwargs)

    @overload
    def astream_log(
        self,
        input: Any,
        config: Optional[RunnableConfig] = None,
        *,
        diff: Literal[True] = True,
        include_names: Optional[Sequence[str]] = None,
        include_types: Optional[Sequence[str]] = None,
        include_tags: Optional[Sequence[str]] = None,
        exclude_names: Optional[Sequence[str]] = None,
        exclude_types: Optional[Sequence[str]] = None,
        exclude_tags: Optional[Sequence[str]] = None,
        **kwargs: Optional[Any],
    ) -> AsyncIterator[RunLogPatch]:
        ...

    @overload
    def astream_log(
        self,
        input: Any,
        config: Optional[RunnableConfig] = None,
        *,
        diff: Literal[False],
        include_names: Optional[Sequence[str]] = None,
        include_types: Optional[Sequence[str]] = None,
        include_tags: Optional[Sequence[str]] = None,
        exclude_names: Optional[Sequence[str]] = None,
        exclude_types: Optional[Sequence[str]] = None,
        exclude_tags: Optional[Sequence[str]] = None,
        **kwargs: Optional[Any],
    ) -> AsyncIterator[RunLog]:
        ...

    async def astream_log(
        self,
        input: Any,
        config: Optional[RunnableConfig] = None,
        *,
        diff: bool = True,
        include_names: Optional[Sequence[str]] = None,
        include_types: Optional[Sequence[str]] = None,
        include_tags: Optional[Sequence[str]] = None,
        exclude_names: Optional[Sequence[str]] = None,
        exclude_types: Optional[Sequence[str]] = None,
        exclude_tags: Optional[Sequence[str]] = None,
        **kwargs: Optional[Any],
    ) -> Union[AsyncIterator[RunLogPatch], AsyncIterator[RunLog]]:
        """
        Stream all output from a runnable, as reported to the callback system.
        This includes all inner runs of LLMs, Retrievers, Tools, etc.

        Output is streamed as Log objects, which include a list of
        jsonpatch ops that describe how the state of the run has changed in each
        step, and the final state of the run.

        The jsonpatch ops can be applied in order to construct state.
        """
        from langchain_core.callbacks.base import BaseCallbackManager
        from langchain_core.tracers.log_stream import (
            LogStreamCallbackHandler,
            RunLog,
            RunLogPatch,
        )

        # Create a stream handler that will emit Log objects
        stream = LogStreamCallbackHandler(
            auto_close=False,
            include_names=include_names,
            include_types=include_types,
            include_tags=include_tags,
            exclude_names=exclude_names,
            exclude_types=exclude_types,
            exclude_tags=exclude_tags,
        )

        # Attach the stream handler to the config's callbacks
        config = config or {}
        callbacks = config.get("callbacks")
        if callbacks is None:
            config["callbacks"] = [stream]
        elif isinstance(callbacks, list):
            config["callbacks"] = callbacks + [stream]
        elif isinstance(callbacks, BaseCallbackManager):
            callbacks = callbacks.copy()
            callbacks.add_handler(stream, inherit=True)
            config["callbacks"] = callbacks
        else:
            raise ValueError(
                f"Unexpected type for callbacks: {callbacks}."
                "Expected None, list or AsyncCallbackManager."
            )

        async def consume_astream() -> None:
            try:
                async for chunk in self.astream(input, config, **kwargs):
                    await stream.send_stream.send(
                        RunLogPatch(
                            {
                                "op": "add",
                                "path": "/streamed_output/-",
                                "value": chunk,
                            }
                        )
                    )
            finally:
                await stream.send_stream.aclose()

        # Start the runnable in a task, so we can start consuming output
        task = asyncio.create_task(consume_astream())

        try:
            if diff:
                async for log in stream:
                    yield log
            else:
                state = RunLog(state=None)
                async for log in stream:
                    state = state + log
                    yield state
        finally:
            try:
                await task
            except asyncio.CancelledError:
                pass

    def transform(
        self,
        input: Iterator[Input],
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> Iterator[Output]:
        """
        Default implementation of transform, which buffers input and then calls stream.
        Subclasses should override this method if they can start producing output while
        input is still being generated.
        """
        final: Input
        got_first_val = False

        for chunk in input:
            if not got_first_val:
                final = chunk
                got_first_val = True
            else:
                final = final + chunk

        if got_first_val:
            yield from self.stream(final, config, **kwargs)

    async def atransform(
        self,
        input: AsyncIterator[Input],
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> AsyncIterator[Output]:
        """
        Default implementation of atransform, which buffers input and calls astream.
        Subclasses should override this method if they can start producing output while
        input is still being generated.
        """
        final: Input
        got_first_val = False

        async for chunk in input:
            if not got_first_val:
                final = chunk
                got_first_val = True
            else:
                final = final + chunk

        if got_first_val:
            async for output in self.astream(final, config, **kwargs):
                yield output

    def bind(self, **kwargs: Any) -> Runnable[Input, Output]:
""" Bind arguments to a Runnable, returning a new Runnable. """ return RunnableBinding(bound=self, kwargs=kwargs, config={}) def with_config( self, config: Optional[RunnableConfig] = None, **kwargs: Any, ) -> Runnable[Input, Output]: """ Bind config to a Runnable, returning a new Runnable. """ return RunnableBinding( bound=self, config=cast( RunnableConfig, {**(config or {}), **kwargs}, ), kwargs={}, ) def with_listeners( self, *,
        on_start: Optional[Listener] = None,
        on_end: Optional[Listener] = None,
        on_error: Optional[Listener] = None,
    ) -> Runnable[Input, Output]:
        """
        Bind lifecycle listeners to a Runnable, returning a new Runnable.

        on_start: Called before the runnable starts running, with the Run object.
        on_end: Called after the runnable finishes running, with the Run object.
        on_error: Called if the runnable throws an error, with the Run object.

        The Run object contains information about the run, including its id,
        type, input, output, error, start_time, end_time, and any tags or metadata
        added to the run.
        """
        from langchain_core.tracers.root_listeners import RootListenersTracer

        return RunnableBinding(
            bound=self,
            config_factories=[
                lambda config: {
                    "callbacks": [
                        RootListenersTracer(
                            config=config,
                            on_start=on_start,
                            on_end=on_end,
                            on_error=on_error,
                        )
                    ],
                }
            ],
        )

    def with_types(
        self,
        *,
        input_type: Optional[Type[Input]] = None,
        output_type: Optional[Type[Output]] = None,
    ) -> Runnable[Input, Output]:
        """
        Bind input and output types to a Runnable, returning a new Runnable.
        """
        return RunnableBinding(
            bound=self,
            custom_input_type=input_type,
            custom_output_type=output_type,
            kwargs={},
        )

    def with_retry(
        self,
        *,
        retry_if_exception_type: Tuple[Type[BaseException], ...] = (Exception,),
        wait_exponential_jitter: bool = True,
        stop_after_attempt: int = 3,
    ) -> Runnable[Input, Output]:
        """Create a new Runnable that retries the original runnable on exceptions.

        Args:
            retry_if_exception_type: A tuple of exception types to retry on
            wait_exponential_jitter: Whether to add jitter to the wait time
                between retries
            stop_after_attempt: The maximum number of attempts to make before giving up

        Returns:
            A new Runnable that retries the original runnable on exceptions.
        """
        from langchain_core.runnables.retry import RunnableRetry

        return RunnableRetry(
            bound=self,
            kwargs={},
            config={},
            retry_exception_types=retry_if_exception_type,
            wait_exponential_jitter=wait_exponential_jitter,
            max_attempt_number=stop_after_attempt,
        )

    def map(self) -> Runnable[List[Input], List[Output]]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
""" Return a new Runnable that maps a list of inputs to a list of outputs, by calling invoke() with each input. """ return RunnableEach(bound=self) def with_fallbacks( self, fallbacks: Sequence[Runnable[Input, Output]], *, exceptions_to_handle: Tuple[Type[BaseException], ...] = (Exception,), ) -> RunnableWithFallbacksT[Input, Output]: """Add fallbacks to a runnable, returning a new Runnable. Args: fallbacks: A sequence of runnables to try if the original runnable fails. exceptions_to_handle: A tuple of exception types to handle. Returns: A new Runnable that will try the original runnable, and then each fallback in order, upon failures. """ from langchain_core.runnables.fallbacks import RunnableWithFallbacks return RunnableWithFallbacks( runnable=self, fallbacks=fallbacks, exceptions_to_handle=exceptions_to_handle, ) """ --- Helper methods for Subclasses --- """ def _call_with_config(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        func: Union[
            Callable[[Input], Output],
            Callable[[Input, CallbackManagerForChainRun], Output],
            Callable[[Input, CallbackManagerForChainRun, RunnableConfig], Output],
        ],
        input: Input,
        config: Optional[RunnableConfig],
        run_type: Optional[str] = None,
        **kwargs: Optional[Any],
    ) -> Output:
        """Helper method to transform an Input value to an Output value,
        with callbacks. Use this method to implement invoke() in subclasses."""
        config = ensure_config(config)
        callback_manager = get_callback_manager_for_config(config)
        run_manager = callback_manager.on_chain_start(
            dumpd(self),
            input,
            run_type=run_type,
            name=config.get("run_name"),
        )
        try:
            output = call_func_with_variable_args(
                func, input, config, run_manager, **kwargs
            )
        except BaseException as e:
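`_call_with_config` above is the sync path; the bug in this issue is that the async entry point can fall back to it when a sync lambda returns a Runnable. The snippet below is not the patch from PR #13408, only a hedged sketch of the idea:

```python
# Hedged sketch: if a user-supplied sync function returns another Runnable,
# the async path should await that runnable's ainvoke() so the rest of the
# chain stays asynchronous.
import asyncio
from langchain_core.runnables import Runnable, RunnableLambda

async def ainvoke_result(func, value):
    output = func(value)  # the user-supplied sync function
    if isinstance(output, Runnable):
        return await output.ainvoke(value)  # recurse on the async path
    return output

inner = RunnableLambda(lambda x: f"inner saw {x}")  # stand-in nested chain
print(asyncio.run(ainvoke_result(lambda x: inner, "toto")))  # -> 'inner saw toto'
```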
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
            run_manager.on_chain_error(e)
            raise
        else:
            run_manager.on_chain_end(dumpd(output))
            return output

    async def _acall_with_config(
        self,
        func: Union[
            Callable[[Input], Awaitable[Output]],
            Callable[[Input, AsyncCallbackManagerForChainRun], Awaitable[Output]],
            Callable[
                [Input, AsyncCallbackManagerForChainRun, RunnableConfig],
                Awaitable[Output],
            ],
        ],
        input: Input,
        config: Optional[RunnableConfig],
        run_type: Optional[str] = None,
        **kwargs: Optional[Any],
    ) -> Output:
        """Helper method to transform an Input value to an Output value,
        with callbacks. Use this method to implement ainvoke() in subclasses."""
        config = ensure_config(config)
        callback_manager = get_async_callback_manager_for_config(config)
        run_manager = await callback_manager.on_chain_start(
            dumpd(self),
            input,
            run_type=run_type,
            name=config.get("run_name"),
        )
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        try:
            output = await acall_func_with_variable_args(
                func, input, config, run_manager, **kwargs
            )
        except BaseException as e:
            await run_manager.on_chain_error(e)
            raise
        else:
            await run_manager.on_chain_end(dumpd(output))
            return output

    def _batch_with_config(
        self,
        func: Union[
            Callable[[List[Input]], List[Union[Exception, Output]]],
            Callable[
                [List[Input], List[CallbackManagerForChainRun]],
                List[Union[Exception, Output]],
            ],
            Callable[
                [List[Input], List[CallbackManagerForChainRun], List[RunnableConfig]],
                List[Union[Exception, Output]],
            ],
        ],
        input: List[Input],
        config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None,
        *,
        return_exceptions: bool = False,
        run_type: Optional[str] = None,
        **kwargs: Optional[Any],
    ) -> List[Output]:
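`_acall_with_config` above is the async counterpart used by `ainvoke()`. Here is the issue's reproduction, lightly cleaned up; with the fix from PR #13408 the nested chain's async function should run under `ainvoke`:

```python
import asyncio
from langchain_core.runnables import RunnableLambda

def idchain_sync(x):
    print(f"sync chain call: {x}")
    return x

async def idchain_async(x):
    print(f"async chain call: {x}")
    return x

idchain = RunnableLambda(idchain_sync, afunc=idchain_async)
outer = RunnableLambda(lambda x: idchain)  # a sync func returning a runnable
asyncio.run(outer.ainvoke("toto"))  # expected: 'async chain call: toto'
```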
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
"""Helper method to transform an Input value to an Output value, with callbacks. Use this method to implement invoke() in subclasses.""" if not input: return [] configs = get_config_list(config, len(input)) callback_managers = [get_callback_manager_for_config(c) for c in configs] run_managers = [ callback_manager.on_chain_start( dumpd(self), input, run_type=run_type, name=config.get("run_name"), ) for callback_manager, input, config in zip( callback_managers, input, configs ) ] try: if accepts_config(func): kwargs["config"] = [ patch_config(c, callbacks=rm.get_child()) for c, rm in zip(configs, run_managers) ] if accepts_run_manager(func): kwargs["run_manager"] = run_managers output = func(input, **kwargs) except BaseException as e: for run_manager in run_managers: run_manager.on_chain_error(e) if return_exceptions:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
                return cast(List[Output], [e for _ in input])
            else:
                raise
        else:
            first_exception: Optional[Exception] = None
            for run_manager, out in zip(run_managers, output):
                if isinstance(out, Exception):
                    first_exception = first_exception or out
                    run_manager.on_chain_error(out)
                else:
                    run_manager.on_chain_end(dumpd(out))
            if return_exceptions or first_exception is None:
                return cast(List[Output], output)
            else:
                raise first_exception

    async def _abatch_with_config(
        self,
        func: Union[
            Callable[[List[Input]], Awaitable[List[Union[Exception, Output]]]],
            Callable[
                [List[Input], List[AsyncCallbackManagerForChainRun]],
                Awaitable[List[Union[Exception, Output]]],
            ],
            Callable[
                [
                    List[Input],
                    List[AsyncCallbackManagerForChainRun],
                    List[RunnableConfig],
                ],
                Awaitable[List[Union[Exception, Output]]],
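A sketch of the batch semantics handled by the helper above, including the `return_exceptions` flag; the inputs are illustrative:

```python
from langchain_core.runnables import RunnableLambda

add_one = RunnableLambda(lambda x: x + 1)
print(add_one.batch([1, 2, 3]))  # -> [2, 3, 4]

# With return_exceptions=True, failures are returned in place of raising.
results = add_one.batch([1, "a"], return_exceptions=True)
print(results)  # -> [2, TypeError(...)] ("a" + 1 is a per-input failure)
```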
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
            ],
        ],
        input: List[Input],
        config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None,
        *,
        return_exceptions: bool = False,
        run_type: Optional[str] = None,
        **kwargs: Optional[Any],
    ) -> List[Output]:
        """Helper method to transform an Input value to an Output value,
        with callbacks. Use this method to implement invoke() in subclasses."""
        if not input:
            return []

        configs = get_config_list(config, len(input))
        callback_managers = [get_async_callback_manager_for_config(c) for c in configs]
        run_managers: List[AsyncCallbackManagerForChainRun] = await asyncio.gather(
            *(
                callback_manager.on_chain_start(
                    dumpd(self),
                    input,
                    run_type=run_type,
                    name=config.get("run_name"),
                )
                for callback_manager, input, config in zip(
                    callback_managers, input, configs
                )
            )
        )
        try:
            if accepts_config(func):
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
kwargs["config"] = [ patch_config(c, callbacks=rm.get_child()) for c, rm in zip(configs, run_managers) ] if accepts_run_manager(func): kwargs["run_manager"] = run_managers output = await func(input, **kwargs) except BaseException as e: await asyncio.gather( *(run_manager.on_chain_error(e) for run_manager in run_managers) ) if return_exceptions: return cast(List[Output], [e for _ in input]) else: raise else: first_exception: Optional[Exception] = None coros: List[Awaitable[None]] = [] for run_manager, out in zip(run_managers, output): if isinstance(out, Exception): first_exception = first_exception or out coros.append(run_manager.on_chain_error(out)) else: coros.append(run_manager.on_chain_end(dumpd(out))) await asyncio.gather(*coros) if return_exceptions or first_exception is None: return cast(List[Output], output) else: raise first_exception def _transform_stream_with_config(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        input: Iterator[Input],
        transformer: Union[
            Callable[[Iterator[Input]], Iterator[Output]],
            Callable[[Iterator[Input], CallbackManagerForChainRun], Iterator[Output]],
            Callable[
                [
                    Iterator[Input],
                    CallbackManagerForChainRun,
                    RunnableConfig,
                ],
                Iterator[Output],
            ],
        ],
        config: Optional[RunnableConfig],
        run_type: Optional[str] = None,
        **kwargs: Optional[Any],
    ) -> Iterator[Output]:
        """Helper method to transform an Iterator of Input values into an
        Iterator of Output values, with callbacks.
        Use this to implement `stream()` or `transform()` in Runnable subclasses."""
        # tee the input so it can be iterated twice: once for tracing, once
        # for the transformer itself
        input_for_tracing, input_for_transform = tee(input, 2)
        final_input: Optional[Input] = next(input_for_tracing, None)
        final_input_supported = True
        final_output: Optional[Output] = None
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        final_output_supported = True

        config = ensure_config(config)
        callback_manager = get_callback_manager_for_config(config)
        run_manager = callback_manager.on_chain_start(
            dumpd(self),
            {"input": ""},
            run_type=run_type,
            name=config.get("run_name"),
        )
        try:
            if accepts_config(transformer):
                kwargs["config"] = patch_config(
                    config, callbacks=run_manager.get_child()
                )
            if accepts_run_manager(transformer):
                kwargs["run_manager"] = run_manager
            iterator = transformer(input_for_transform, **kwargs)
            for chunk in iterator:
                yield chunk
                if final_output_supported:
                    if final_output is None:
                        final_output = chunk
                    else:
                        try:
                            final_output = final_output + chunk
                        except TypeError:
                            final_output = None
                            final_output_supported = False
            for ichunk in input_for_tracing:
                if final_input_supported:
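To see the transform machinery above in action, here is a hedged sketch that assumes `RunnableGenerator` (which implements `transform()`) is exported from `langchain_core.runnables` at this commit:

```python
from typing import Iterator
from langchain_core.runnables import RunnableGenerator

def upper_chunks(chunks: Iterator[str]) -> Iterator[str]:
    for chunk in chunks:
        yield chunk.upper()  # emit each chunk as soon as it arrives

streamer = RunnableGenerator(upper_chunks)
for piece in streamer.stream("abc"):
    print(piece)  # -> 'ABC' (one chunk, since the input is a single value)
```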
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
                    if final_input is None:
                        final_input = ichunk
                    else:
                        try:
                            final_input = final_input + ichunk
                        except TypeError:
                            final_input = None
                            final_input_supported = False
        except BaseException as e:
            run_manager.on_chain_error(e, inputs=final_input)
            raise
        else:
            run_manager.on_chain_end(final_output, inputs=final_input)

    async def _atransform_stream_with_config(
        self,
        input: AsyncIterator[Input],
        transformer: Union[
            Callable[[AsyncIterator[Input]], AsyncIterator[Output]],
            Callable[
                [AsyncIterator[Input], AsyncCallbackManagerForChainRun],
                AsyncIterator[Output],
            ],
            Callable[
                [
                    AsyncIterator[Input],
                    AsyncCallbackManagerForChainRun,
                    RunnableConfig,
                ],
                AsyncIterator[Output],
            ],
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        ],
        config: Optional[RunnableConfig],
        run_type: Optional[str] = None,
        **kwargs: Optional[Any],
    ) -> AsyncIterator[Output]:
        """Helper method to transform an Async Iterator of Input values into an
        Async Iterator of Output values, with callbacks.
        Use this to implement `astream()` or `atransform()` in Runnable subclasses."""
        # tee the input so it can be iterated twice: once for tracing, once
        # for the transformer itself
        input_for_tracing, input_for_transform = atee(input, 2)
        final_input: Optional[Input] = await py_anext(input_for_tracing, None)
        final_input_supported = True
        final_output: Optional[Output] = None
        final_output_supported = True

        config = ensure_config(config)
        callback_manager = get_async_callback_manager_for_config(config)
        run_manager = await callback_manager.on_chain_start(
            dumpd(self),
            {"input": ""},
            run_type=run_type,
            name=config.get("run_name"),
        )
        try:
            if accepts_config(transformer):
                kwargs["config"] = patch_config(
                    config, callbacks=run_manager.get_child()
                )
            if accepts_run_manager(transformer):
                kwargs["run_manager"] = run_manager
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
            iterator = transformer(input_for_transform, **kwargs)
            async for chunk in iterator:
                yield chunk
                if final_output_supported:
                    if final_output is None:
                        final_output = chunk
                    else:
                        try:
                            final_output = final_output + chunk
                        except TypeError:
                            final_output = None
                            final_output_supported = False
            async for ichunk in input_for_tracing:
                if final_input_supported:
                    if final_input is None:
                        final_input = ichunk
                    else:
                        try:
                            final_input = final_input + ichunk
                        except TypeError:
                            final_input = None
                            final_input_supported = False
        except BaseException as e:
            await run_manager.on_chain_error(e, inputs=final_input)
            raise
        else:
            await run_manager.on_chain_end(final_output, inputs=final_input)


class RunnableSerializable(Serializable, Runnable[Input, Output]):
    """A Runnable that can be serialized to JSON."""

    def configurable_fields(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self, **kwargs: AnyConfigurableField
    ) -> RunnableSerializable[Input, Output]:
        from langchain_core.runnables.configurable import RunnableConfigurableFields

        for key in kwargs:
            if key not in self.__fields__:
                raise ValueError(
                    f"Configuration key {key} not found in {self}: "
                    f"available keys are {self.__fields__.keys()}"
                )

        return RunnableConfigurableFields(default=self, fields=kwargs)

    def configurable_alternatives(
        self,
        which: ConfigurableField,
        *,
        default_key: str = "default",
        prefix_keys: bool = False,
        **kwargs: Union[
            Runnable[Input, Output], Callable[[], Runnable[Input, Output]]
        ],
    ) -> RunnableSerializable[Input, Output]:
        from langchain_core.runnables.configurable import (
            RunnableConfigurableAlternatives,
        )

        return RunnableConfigurableAlternatives(
            which=which,
            default=self,
            alternatives=kwargs,
            default_key=default_key,
            prefix_keys=prefix_keys,
        )


class RunnableSequence(RunnableSerializable[Input, Output]):
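A hedged sketch of `configurable_fields` in use; the `Repeater` class is hypothetical and stands in for a model with tunable pydantic fields:

```python
from typing import Optional
from langchain_core.runnables import (
    ConfigurableField,
    RunnableConfig,
    RunnableSerializable,
)

class Repeater(RunnableSerializable[str, str]):
    times: int = 1  # the pydantic field we expose as configurable

    def invoke(self, input: str, config: Optional[RunnableConfig] = None) -> str:
        return input * self.times

repeater = Repeater().configurable_fields(
    times=ConfigurableField(id="times", name="Repeat count")
)
print(repeater.invoke("ab"))                                         # -> 'ab'
print(repeater.with_config(configurable={"times": 3}).invoke("ab"))  # -> 'ababab'
```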
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
"""A sequence of runnables, where the output of each is the input of the next. RunnableSequence is the most important composition operator in LangChain as it is used in virtually every chain. A RunnableSequence can be instantiated directly or more commonly by using the `|` operator where either the left or right operands (or both) must be a Runnable. Any RunnableSequence automatically supports sync, async, batch. The default implementations of `batch` and `abatch` utilize threadpools and asyncio gather and will be faster than naive invocation of invoke or ainvoke for IO bound runnables. Batching is implemented by invoking the batch method on each component of the RunnableSequence in order. A RunnableSequence preserves the streaming properties of its components, so if all components of the sequence implement a `transform` method -- which is the method that implements the logic to map a streaming input to a streaming output -- then the sequence will be able to stream input to output! If any component of the sequence does not implement transform then the streaming will only begin after this component is run. If there are multiple blocking components, streaming begins after the last one.
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
    Please note: RunnableLambdas do not support `transform` by default! So if
    you need to use a RunnableLambda, be careful about where you place it in a
    RunnableSequence (if you need to use the .stream()/.astream() methods).

    If you need arbitrary logic and need streaming, you can subclass Runnable,
    and implement `transform` for whatever logic you need.

    Here is a simple example that uses simple functions to illustrate the use of
    RunnableSequence:

        .. code-block:: python

            from langchain_core.runnables import RunnableLambda

            def add_one(x: int) -> int:
                return x + 1

            def mul_two(x: int) -> int:
                return x * 2

            runnable_1 = RunnableLambda(add_one)
            runnable_2 = RunnableLambda(mul_two)
            sequence = runnable_1 | runnable_2
            # Or equivalently:
            # sequence = RunnableSequence(first=runnable_1, last=runnable_2)
            sequence.invoke(1)
            await sequence.ainvoke(1)

            sequence.batch([1, 2, 3])
            await sequence.abatch([1, 2, 3])

    Here's an example that streams JSON output generated by an LLM:

        .. code-block:: python

            from langchain_core.output_parsers.json import SimpleJsonOutputParser
            from langchain_core.prompts import PromptTemplate
            from langchain.chat_models import ChatOpenAI  # shipped in `langchain`

            prompt = PromptTemplate.from_template(
                'In JSON format, give me a list of {topic} and their '
                'corresponding names in French, Spanish and in a '
                'Cat Language.'
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
            )
            model = ChatOpenAI()
            chain = prompt | model | SimpleJsonOutputParser()

            async for chunk in chain.astream({'topic': 'colors'}):
                print('-')
                print(chunk, sep='', flush=True)
    """

    first: Runnable[Input, Any]
    """The first runnable in the sequence."""
    middle: List[Runnable[Any, Any]] = Field(default_factory=list)
    """The middle runnables in the sequence."""
    last: Runnable[Any, Output]
    """The last runnable in the sequence."""

    @property
    def steps(self) -> List[Runnable[Any, Any]]:
        """All the runnables that make up the sequence in order."""
        return [self.first] + self.middle + [self.last]

    @classmethod
    def is_lc_serializable(cls) -> bool:
        return True

    @classmethod
    def get_lc_namespace(cls) -> List[str]:
        return cls.__module__.split(".")[:-1]

    class Config:
        arbitrary_types_allowed = True

    @property
    def InputType(self) -> Type[Input]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        return self.first.InputType

    @property
    def OutputType(self) -> Type[Output]:
        return self.last.OutputType

    def get_input_schema(
        self, config: Optional[RunnableConfig] = None
    ) -> Type[BaseModel]:
        from langchain_core.runnables.passthrough import RunnableAssign

        if isinstance(self.first, RunnableAssign):
            first = cast(RunnableAssign, self.first)
            next_ = self.middle[0] if self.middle else self.last
            next_input_schema = next_.get_input_schema(config)
            if not next_input_schema.__custom_root_type__:
                # expose only the keys the assign step does not itself produce
                return create_model(
                    "RunnableSequenceInput",
                    **{
                        k: (v.annotation, v.default)
                        for k, v in next_input_schema.__fields__.items()
                        if k not in first.mapper.steps
                    },
                )

        return self.first.get_input_schema(config)

    def get_output_schema(
        self, config: Optional[RunnableConfig] = None
    ) -> Type[BaseModel]:
        return self.last.get_output_schema(config)

    @property
    def config_specs(self) -> List[ConfigurableFieldSpec]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        return get_unique_config_specs(
            spec for step in self.steps for spec in step.config_specs
        )

    def __repr__(self) -> str:
        return "\n| ".join(
            repr(s) if i == 0 else indent_lines_after_first(repr(s), "| ")
            for i, s in enumerate(self.steps)
        )

    def __or__(
        self,
        other: Union[
            Runnable[Any, Other],
            Callable[[Any], Other],
            Callable[[Iterator[Any]], Iterator[Other]],
            Mapping[str, Union[Runnable[Any, Other], Callable[[Any], Other], Any]],
        ],
    ) -> RunnableSerializable[Input, Other]:
        if isinstance(other, RunnableSequence):
            return RunnableSequence(
                first=self.first,
                middle=self.middle + [self.last] + [other.first] + other.middle,
                last=other.last,
            )
        else:
            return RunnableSequence(
                first=self.first,
                middle=self.middle + [self.last],
                last=coerce_to_runnable(other),
            )

    def __ror__(
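A sketch of the coercion performed by `__or__` above: plain functions and dicts on either side of `|` are wrapped into runnables automatically (dicts become a parallel map step):

```python
from langchain_core.runnables import RunnableLambda

chain = RunnableLambda(lambda x: x + 1) | {
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}
print(chain.invoke(3))  # -> {'double': 8, 'square': 16}
```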
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        other: Union[
            Runnable[Other, Any],
            Callable[[Other], Any],
            Callable[[Iterator[Other]], Iterator[Any]],
            Mapping[str, Union[Runnable[Other, Any], Callable[[Other], Any], Any]],
        ],
    ) -> RunnableSerializable[Other, Output]:
        if isinstance(other, RunnableSequence):
            return RunnableSequence(
                first=other.first,
                middle=other.middle + [other.last] + [self.first] + self.middle,
                last=self.last,
            )
        else:
            return RunnableSequence(
                first=coerce_to_runnable(other),
                middle=[self.first] + self.middle,
                last=self.last,
            )

    def invoke(self, input: Input, config: Optional[RunnableConfig] = None) -> Output:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        # setup callbacks
        config = ensure_config(config)
        callback_manager = get_callback_manager_for_config(config)
        # start the root run
        run_manager = callback_manager.on_chain_start(
            dumpd(self), input, name=config.get("run_name")
        )

        # invoke all steps in sequence
        try:
            for i, step in enumerate(self.steps):
                input = step.invoke(
                    input,
                    # mark each step as a child run
                    patch_config(
                        config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
                    ),
                )
        # finish the root run
        except BaseException as e:
            run_manager.on_chain_error(e)
            raise
        else:
            run_manager.on_chain_end(input)
            return cast(Output, input)

    async def ainvoke(
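Each step above is traced as a child run tagged `seq:step:<n>`; a `run_name` in the config labels the root chain run for tracing backends. A minimal sketch:

```python
from langchain_core.runnables import RunnableLambda

seq = RunnableLambda(lambda x: x + 1) | RunnableLambda(lambda x: x * 2)
print(seq.invoke(1, config={"run_name": "demo-sequence"}))  # -> 4
```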
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently cause chains to run synchronously even though the user called ainvoke. When a RunnableLambda A returning a chain B is called with ainvoke, we would expect the new chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing all the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        input: Input,
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> Output:
        config = ensure_config(config)
        callback_manager = get_async_callback_manager_for_config(config)
        run_manager = await callback_manager.on_chain_start(
            dumpd(self), input, name=config.get("run_name")
        )
        try:
            for i, step in enumerate(self.steps):
                input = await step.ainvoke(
                    input,
                    patch_config(
                        config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
                    ),
                )
        except BaseException as e:
            await run_manager.on_chain_error(e)
            raise
        else:
            await run_manager.on_chain_end(input)
            return cast(Output, input)

    def batch(
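A matching async usage sketch (illustrative): ainvoke awaits every step, which is exactly the path the reported bug silently leaves when a lambda returns a nested chain.

```
import asyncio

from langchain.schema.runnable import RunnableLambda


async def double(x: int) -> int:
    return x * 2


chain = RunnableLambda(double) | RunnableLambda(lambda x: x + 1)
print(asyncio.run(chain.ainvoke(3)))  # 7, with each step awaited
```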
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        inputs: List[Input],
        config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None,
        *,
        return_exceptions: bool = False,
        **kwargs: Optional[Any],
    ) -> List[Output]:
        from langchain_core.callbacks.manager import CallbackManager

        if not inputs:
            return []

        configs = get_config_list(config, len(inputs))
        callback_managers = [
            CallbackManager.configure(
                inheritable_callbacks=config.get("callbacks"),
                local_callbacks=None,
                verbose=False,
                inheritable_tags=config.get("tags"),
                local_tags=None,
                inheritable_metadata=config.get("metadata"),
                local_metadata=None,
            )
            for config in configs
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        ]
        run_managers = [
            cm.on_chain_start(
                dumpd(self),
                input,
                name=config.get("run_name"),
            )
            for cm, input, config in zip(callback_managers, inputs, configs)
        ]
        try:
            if return_exceptions:
                failed_inputs_map: Dict[int, Exception] = {}
                for stepidx, step in enumerate(self.steps):
                    remaining_idxs = [
                        i for i in range(len(configs)) if i not in failed_inputs_map
                    ]
                    inputs = step.batch(
                        [
                            inp
                            for i, inp in zip(remaining_idxs, inputs)
                            if i not in failed_inputs_map
                        ],
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
                        [
                            patch_config(
                                config, callbacks=rm.get_child(f"seq:step:{stepidx+1}")
                            )
                            for i, (rm, config) in enumerate(zip(run_managers, configs))
                            if i not in failed_inputs_map
                        ],
                        return_exceptions=return_exceptions,
                        **kwargs,
                    )
                    for i, inp in zip(remaining_idxs, inputs):
                        if isinstance(inp, Exception):
                            failed_inputs_map[i] = inp
                    inputs = [inp for inp in inputs if not isinstance(inp, Exception)]
                    if len(failed_inputs_map) == len(configs):
                        break
                inputs_copy = inputs.copy()
                inputs = []
                for i in range(len(configs)):
                    if i in failed_inputs_map:
                        inputs.append(cast(Input, failed_inputs_map[i]))
                    else:
                        inputs.append(inputs_copy.pop(0))
            else:
                for i, step in enumerate(self.steps):
                    inputs = step.batch(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
                        inputs,
                        [
                            patch_config(
                                config, callbacks=rm.get_child(f"seq:step:{i+1}")
                            )
                            for rm, config in zip(run_managers, configs)
                        ],
                    )
        except BaseException as e:
            for rm in run_managers:
                rm.on_chain_error(e)
            if return_exceptions:
                return cast(List[Output], [e for _ in inputs])
            else:
                raise
        else:
            first_exception: Optional[Exception] = None
            for run_manager, out in zip(run_managers, inputs):
                if isinstance(out, Exception):
                    first_exception = first_exception or out
                    run_manager.on_chain_error(out)
                else:
                    run_manager.on_chain_end(dumpd(out))
            if return_exceptions or first_exception is None:
                return cast(List[Output], inputs)
            else:
                raise first_exception

    async def abatch(
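A short sketch of the return_exceptions behavior implemented above (values illustrative): a failing input comes back as its exception instead of aborting the whole batch.

```
from langchain.schema.runnable import RunnableLambda


def reciprocal(x: float) -> float:
    return 1 / x  # raises ZeroDivisionError for 0.0


chain = RunnableLambda(reciprocal) | RunnableLambda(lambda x: x * 10)
print(chain.batch([1.0, 0.0, 4.0], return_exceptions=True))
# e.g. [10.0, ZeroDivisionError('division by zero'), 2.5]
```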
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        inputs: List[Input],
        config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None,
        *,
        return_exceptions: bool = False,
        **kwargs: Optional[Any],
    ) -> List[Output]:
        from langchain_core.callbacks.manager import (
            AsyncCallbackManager,
        )

        if not inputs:
            return []

        configs = get_config_list(config, len(inputs))
        callback_managers = [
            AsyncCallbackManager.configure(
                inheritable_callbacks=config.get("callbacks"),
                local_callbacks=None,
                verbose=False,
                inheritable_tags=config.get("tags"),
                local_tags=None,
                inheritable_metadata=config.get("metadata"),
                local_metadata=None,
            )
            for config in configs
        ]
        run_managers: List[AsyncCallbackManagerForChainRun] = await asyncio.gather(
            *(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
                cm.on_chain_start(
                    dumpd(self),
                    input,
                    name=config.get("run_name"),
                )
                for cm, input, config in zip(callback_managers, inputs, configs)
            )
        )
        try:
            if return_exceptions:
                failed_inputs_map: Dict[int, Exception] = {}
                for stepidx, step in enumerate(self.steps):
                    remaining_idxs = [
                        i for i in range(len(configs)) if i not in failed_inputs_map
                    ]
                    inputs = await step.abatch(
                        [
                            inp
                            for i, inp in zip(remaining_idxs, inputs)
                            if i not in failed_inputs_map
                        ],
                        [
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
                            patch_config(
                                config, callbacks=rm.get_child(f"seq:step:{stepidx+1}")
                            )
                            for i, (rm, config) in enumerate(zip(run_managers, configs))
                            if i not in failed_inputs_map
                        ],
                        return_exceptions=return_exceptions,
                        **kwargs,
                    )
                    for i, inp in zip(remaining_idxs, inputs):
                        if isinstance(inp, Exception):
                            failed_inputs_map[i] = inp
                    inputs = [inp for inp in inputs if not isinstance(inp, Exception)]
                    if len(failed_inputs_map) == len(configs):
                        break
                inputs_copy = inputs.copy()
                inputs = []
                for i in range(len(configs)):
                    if i in failed_inputs_map:
                        inputs.append(cast(Input, failed_inputs_map[i]))
                    else:
                        inputs.append(inputs_copy.pop(0))
            else:
                for i, step in enumerate(self.steps):
                    inputs = await step.abatch(
                        inputs,
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
                        [
                            patch_config(
                                config, callbacks=rm.get_child(f"seq:step:{i+1}")
                            )
                            for rm, config in zip(run_managers, configs)
                        ],
                    )
        except BaseException as e:
            await asyncio.gather(*(rm.on_chain_error(e) for rm in run_managers))
            if return_exceptions:
                return cast(List[Output], [e for _ in inputs])
            else:
                raise
        else:
            first_exception: Optional[Exception] = None
            coros: List[Awaitable[None]] = []
            for run_manager, out in zip(run_managers, inputs):
                if isinstance(out, Exception):
                    first_exception = first_exception or out
                    coros.append(run_manager.on_chain_error(out))
                else:
                    coros.append(run_manager.on_chain_end(dumpd(out)))
            await asyncio.gather(*coros)
            if return_exceptions or first_exception is None:
                return cast(List[Output], inputs)
            else:
                raise first_exception

    def _transform(
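And the async counterpart in use (illustrative): abatch gathers the per-input runs concurrently.

```
import asyncio

from langchain.schema.runnable import RunnableLambda

chain = RunnableLambda(lambda x: x.upper())
print(asyncio.run(chain.abatch(["a", "b", "c"])))  # ['A', 'B', 'C']
```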
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        input: Iterator[Input],
        run_manager: CallbackManagerForChainRun,
        config: RunnableConfig,
    ) -> Iterator[Output]:
        steps = [self.first] + self.middle + [self.last]

        final_pipeline = cast(Iterator[Output], input)
        for step in steps:
            final_pipeline = step.transform(
                final_pipeline,
                patch_config(
                    config,
                    callbacks=run_manager.get_child(f"seq:step:{steps.index(step)+1}"),
                ),
            )

        for output in final_pipeline:
            yield output

    async def _atransform(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        input: AsyncIterator[Input],
        run_manager: AsyncCallbackManagerForChainRun,
        config: RunnableConfig,
    ) -> AsyncIterator[Output]:
        steps = [self.first] + self.middle + [self.last]

        final_pipeline = cast(AsyncIterator[Output], input)
        for step in steps:
            final_pipeline = step.atransform(
                final_pipeline,
                patch_config(
                    config,
                    callbacks=run_manager.get_child(f"seq:step:{steps.index(step)+1}"),
                ),
            )
        async for output in final_pipeline:
            yield output

    def transform(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        input: Iterator[Input],
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> Iterator[Output]:
        yield from self._transform_stream_with_config(
            input, self._transform, config, **kwargs
        )

    def stream(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        input: Input,
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> Iterator[Output]:
        yield from self.transform(iter([input]), config, **kwargs)

    async def atransform(
        self,
        input: AsyncIterator[Input],
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> AsyncIterator[Output]:
        async for chunk in self._atransform_stream_with_config(
            input, self._atransform, config, **kwargs
        ):
            yield chunk

    async def astream(
        self,
        input: Input,
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> AsyncIterator[Output]:
        async def input_aiter() -> AsyncIterator[Input]:
            yield input

        async for chunk in self.atransform(input_aiter(), config, **kwargs):
            yield chunk


class RunnableParallel(RunnableSerializable[Input, Dict[str, Any]]):
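To see the streaming entry points above in action (illustrative): stream wraps the single input in an iterator and delegates to transform; non-streaming steps such as plain lambdas emit a single final chunk.

```
from langchain.schema.runnable import RunnableLambda

chain = RunnableLambda(lambda x: x * 2) | RunnableLambda(lambda x: x + 1)
for chunk in chain.stream(3):
    print(chunk)  # 7 — one chunk, since neither lambda streams
```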
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
""" A runnable that runs a mapping of runnables in parallel, and returns a mapping of their outputs. """ steps: Mapping[str, Runnable[Input, Any]] def __init__( self, __steps: Optional[ Mapping[ str, Union[ Runnable[Input, Any], Callable[[Input], Any], Mapping[str, Union[Runnable[Input, Any], Callable[[Input], Any]]], ], ] ] = None, **kwargs: Union[ Runnable[Input, Any], Callable[[Input], Any], Mapping[str, Union[Runnable[Input, Any], Callable[[Input], Any]]], ], ) -> None: merged = {**__steps} if __steps is not None else {} merged.update(kwargs) super().__init__( steps={key: coerce_to_runnable(r) for key, r in merged.items()} ) @classmethod def is_lc_serializable(cls) -> bool:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        return True

    @classmethod
    def get_lc_namespace(cls) -> List[str]:
        return cls.__module__.split(".")[:-1]

    class Config:
        arbitrary_types_allowed = True

    @property
    def InputType(self) -> Any:
        for step in self.steps.values():
            if step.InputType:
                return step.InputType

        return Any

    def get_input_schema(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self, config: Optional[RunnableConfig] = None
    ) -> Type[BaseModel]:
        if all(
            s.get_input_schema(config).schema().get("type", "object") == "object"
            for s in self.steps.values()
        ):
            return create_model(
                "RunnableParallelInput",
                **{
                    k: (v.annotation, v.default)
                    for step in self.steps.values()
                    for k, v in step.get_input_schema(config).__fields__.items()
                    if k != "__root__"
                },
            )

        return super().get_input_schema(config)

    def get_output_schema(
        self, config: Optional[RunnableConfig] = None
    ) -> Type[BaseModel]:
        return create_model(
            "RunnableParallelOutput",
            **{k: (v.OutputType, None) for k, v in self.steps.items()},
        )

    @property
    def config_specs(self) -> List[ConfigurableFieldSpec]:
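The generated pydantic models are reachable through the input_schema/output_schema properties; a quick sketch (assuming the pydantic-v1-style .schema() accessor this code relies on):

```
from langchain.schema.runnable import RunnableLambda, RunnableParallel

fanout = RunnableParallel(x=RunnableLambda(lambda v: v))
print(fanout.output_schema.schema()["title"])  # 'RunnableParallelOutput'
```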
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        return get_unique_config_specs(
            spec for step in self.steps.values() for spec in step.config_specs
        )

    def __repr__(self) -> str:
        map_for_repr = ",\n  ".join(
            f"{k}: {indent_lines_after_first(repr(v), '  ' + k + ': ')}"
            for k, v in self.steps.items()
        )
        return "{\n  " + map_for_repr + "\n}"

    def invoke(
        self, input: Input, config: Optional[RunnableConfig] = None
    ) -> Dict[str, Any]:
        from langchain_core.callbacks.manager import CallbackManager

        config = ensure_config(config)
        callback_manager = CallbackManager.configure(
            inheritable_callbacks=config.get("callbacks"),
            local_callbacks=None,
            verbose=False,
            inheritable_tags=config.get("tags"),
            local_tags=None,
            inheritable_metadata=config.get("metadata"),
            local_metadata=None,
        )
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        run_manager = callback_manager.on_chain_start(
            dumpd(self), input, name=config.get("run_name")
        )
        try:
            steps = dict(self.steps)
            with get_executor_for_config(config) as executor:
                futures = [
                    executor.submit(
                        step.invoke,
                        input,
                        patch_config(
                            config,
                            callbacks=run_manager.get_child(f"map:key:{key}"),
                        ),
                    )
                    for key, step in steps.items()
                ]
                output = {key: future.result() for key, future in zip(steps, futures)}
        except BaseException as e:
            run_manager.on_chain_error(e)
            raise
        else:
            run_manager.on_chain_end(output)
            return output

    async def ainvoke(
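Note that dict literals inside a sequence are coerced to a RunnableParallel, so both branches below run concurrently on the executor used above (illustrative):

```
from langchain.schema.runnable import RunnableLambda

chain = {
    "orig": RunnableLambda(lambda x: x),
    "upper": RunnableLambda(lambda x: x.upper()),
} | RunnableLambda(lambda d: f"{d['orig']} -> {d['upper']}")
print(chain.invoke("hi"))  # hi -> HI
```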
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        input: Input,
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> Dict[str, Any]:
        config = ensure_config(config)
        callback_manager = get_async_callback_manager_for_config(config)
        run_manager = await callback_manager.on_chain_start(
            dumpd(self), input, name=config.get("run_name")
        )
        try:
            steps = dict(self.steps)
            results = await asyncio.gather(
                *(
                    step.ainvoke(
                        input,
                        patch_config(
                            config, callbacks=run_manager.get_child(f"map:key:{key}")
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
                        ),
                    )
                    for key, step in steps.items()
                )
            )
            output = {key: value for key, value in zip(steps, results)}
        except BaseException as e:
            await run_manager.on_chain_error(e)
            raise
        else:
            await run_manager.on_chain_end(output)
            return output

    def _transform(
        self,
        input: Iterator[Input],
        run_manager: CallbackManagerForChainRun,
        config: RunnableConfig,
    ) -> Iterator[AddableDict]:
        steps = dict(self.steps)
        input_copies = list(safetee(input, len(steps), lock=threading.Lock()))
        with get_executor_for_config(config) as executor:
            named_generators = [
                (
                    name,
                    step.transform(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
                        input_copies.pop(),
                        patch_config(
                            config, callbacks=run_manager.get_child(f"map:key:{name}")
                        ),
                    ),
                )
                for name, step in steps.items()
            ]
            futures = {
                executor.submit(next, generator): (step_name, generator)
                for step_name, generator in named_generators
            }
            while futures:
                completed_futures, _ = wait(futures, return_when=FIRST_COMPLETED)
                for future in completed_futures:
                    (step_name, generator) = futures.pop(future)
                    try:
                        chunk = AddableDict({step_name: future.result()})
                        yield chunk
                        futures[executor.submit(next, generator)] = (
                            step_name,
                            generator,
                        )
                    except StopIteration:
                        pass

    def transform(
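Streaming a parallel in use (illustrative): each step transforms its own copy of the input, and chunks arrive as addable dicts keyed by step name, in completion order.

```
from langchain.schema.runnable import RunnableLambda, RunnableParallel

fanout = RunnableParallel(
    a=RunnableLambda(lambda x: x + 1),
    b=RunnableLambda(lambda x: x * 2),
)
for chunk in fanout.stream(3):
    print(chunk)  # e.g. {'a': 4} then {'b': 6}
```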
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        input: Iterator[Input],
        config: Optional[RunnableConfig] = None,
        **kwargs: Any,
    ) -> Iterator[Dict[str, Any]]:
        yield from self._transform_stream_with_config(
            input, self._transform, config, **kwargs
        )

    def stream(
        self,
        input: Input,
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> Iterator[Dict[str, Any]]:
        yield from self.transform(iter([input]), config)

    async def _atransform(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        input: AsyncIterator[Input],
        run_manager: AsyncCallbackManagerForChainRun,
        config: RunnableConfig,
    ) -> AsyncIterator[AddableDict]:
        steps = dict(self.steps)
        input_copies = list(atee(input, len(steps), lock=asyncio.Lock()))
        named_generators = [
            (
                name,
                step.atransform(
                    input_copies.pop(),
                    patch_config(
                        config, callbacks=run_manager.get_child(f"map:key:{name}")
                    ),
                ),
            )
            for name, step in steps.items()
        ]

        async def get_next_chunk(generator: AsyncIterator) -> Optional[Output]:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
            return await py_anext(generator)

        tasks = {
            asyncio.create_task(get_next_chunk(generator)): (step_name, generator)
            for step_name, generator in named_generators
        }
        while tasks:
            completed_tasks, _ = await asyncio.wait(
                tasks, return_when=asyncio.FIRST_COMPLETED
            )
            for task in completed_tasks:
                (step_name, generator) = tasks.pop(task)
                try:
                    chunk = AddableDict({step_name: task.result()})
                    yield chunk
                    new_task = asyncio.create_task(get_next_chunk(generator))
                    tasks[new_task] = (step_name, generator)
                except StopAsyncIteration:
                    pass

    async def atransform(
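The async variant drives the same fan-out with asyncio tasks; in use (illustrative):

```
import asyncio

from langchain.schema.runnable import RunnableLambda, RunnableParallel


async def main() -> None:
    fanout = RunnableParallel(
        a=RunnableLambda(lambda x: x + 1),
        b=RunnableLambda(lambda x: x * 2),
    )
    async for chunk in fanout.astream(3):
        print(chunk)  # {'a': 4} / {'b': 6} in completion order


asyncio.run(main())
```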
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info langchain on master branch ### Who can help? _No response_ ### Information - [ ] The official example notebooks/scripts - [X] My own modified scripts ### Related Components - [ ] LLMs/Chat Models - [ ] Embedding Models - [ ] Prompts / Prompt Templates / Prompt Selectors - [ ] Output Parsers - [ ] Document Loaders - [ ] Vector Stores / Retrievers - [ ] Memory - [ ] Agents / Agent Executors - [ ] Tools / Toolkits - [X] Chains - [ ] Callbacks/Tracing - [X] Async ### Reproduction ``` from langchain.schema.runnable import RunnableLambda import asyncio def idchain_sync(__input): print(f'sync chain call: {__input}') return __input async def idchain_async(__input): print(f'async chain call: {__input}') return __input idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async) def func(__input): return idchain asyncio.run(RunnableLambda(func).ainvoke('toto')) # prints 'sync chain call: toto' instead of 'async chain call: toto' ``` ### Expected behavior LCEL routing can silently run chains synchronously even though the user called ainvoke. When a RunnableLambda A that returns a chain B is called with ainvoke, we would expect chain B to be called with ainvoke as well; however, if the function provided to RunnableLambda A is not async, then chain B is called with invoke, silently causing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        input: AsyncIterator[Input],
        config: Optional[RunnableConfig] = None,
        **kwargs: Any,
    ) -> AsyncIterator[Dict[str, Any]]:
        async for chunk in self._atransform_stream_with_config(
            input, self._atransform, config, **kwargs
        ):
            yield chunk

    async def astream(
        self,
        input: Input,
        config: Optional[RunnableConfig] = None,
        **kwargs: Optional[Any],
    ) -> AsyncIterator[Dict[str, Any]]:
        async def input_aiter() -> AsyncIterator[Input]:
            yield input

        async for chunk in self.atransform(input_aiter(), config):
            yield chunk


RunnableMap = RunnableParallel


class RunnableGenerator(Runnable[Input, Output]):
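For context, here is a small usage sketch of RunnableParallel (aliased as RunnableMap above), whose async streaming path is implemented by the atransform/astream pair in this chunk. The exact chunk shapes yielded by astream are an assumption based on the code shown:

```
import asyncio
from langchain_core.runnables import RunnableLambda, RunnableParallel

parallel = RunnableParallel(
    double=RunnableLambda(lambda x: x * 2),
    square=RunnableLambda(lambda x: x * x),
)

print(parallel.invoke(3))  # {'double': 6, 'square': 9}

async def main():
    # astream yields one-key dicts as each branch finishes,
    # e.g. {'double': 6} then {'square': 9} (order may vary).
    async for chunk in parallel.astream(3):
        print(chunk)

asyncio.run(main())
```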
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info

langchain on master branch

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async

### Reproduction

```
from langchain.schema.runnable import RunnableLambda
import asyncio

def idchain_sync(__input):
    print(f'sync chain call: {__input}')
    return __input

async def idchain_async(__input):
    print(f'async chain call: {__input}')
    return __input

idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async)

def func(__input):
    return idchain

asyncio.run(RunnableLambda(func).ainvoke('toto'))
# prints 'sync chain call: toto' instead of 'async chain call: toto'
```

### Expected behavior

LCEL routing can cause chains to run synchronously without warning even when the user calls ainvoke. When a RunnableLambda A returns a chain B and A is called with ainvoke, we would expect chain B to be called with ainvoke as well. However, if the function provided to RunnableLambda A is not async, chain B is called with invoke instead, silently forcing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
""" A runnable that runs a generator function. """ def __init__( self, transform: Union[ Callable[[Iterator[Input]], Iterator[Output]], Callable[[AsyncIterator[Input]], AsyncIterator[Output]], ], atransform: Optional[ Callable[[AsyncIterator[Input]], AsyncIterator[Output]] ] = None, ) -> None: if atransform is not None: self._atransform = atransform if inspect.isasyncgenfunction(transform): self._atransform = transform elif inspect.isgeneratorfunction(transform): self._transform = transform else: raise TypeError( "Expected a generator function type for `transform`." f"Instead got an unsupported type: {type(transform)}" ) @property def InputType(self) -> Any:
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info

langchain on master branch

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async

### Reproduction

```
from langchain.schema.runnable import RunnableLambda
import asyncio

def idchain_sync(__input):
    print(f'sync chain call: {__input}')
    return __input

async def idchain_async(__input):
    print(f'async chain call: {__input}')
    return __input

idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async)

def func(__input):
    return idchain

asyncio.run(RunnableLambda(func).ainvoke('toto'))
# prints 'sync chain call: toto' instead of 'async chain call: toto'
```

### Expected behavior

LCEL routing can cause chains to run synchronously without warning even when the user calls ainvoke. When a RunnableLambda A returns a chain B and A is called with ainvoke, we would expect chain B to be called with ainvoke as well. However, if the function provided to RunnableLambda A is not async, chain B is called with invoke instead, silently forcing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        func = getattr(self, "_transform", None) or getattr(self, "_atransform")
        try:
            params = inspect.signature(func).parameters
            first_param = next(iter(params.values()), None)
            if first_param and first_param.annotation != inspect.Parameter.empty:
                return getattr(first_param.annotation, "__args__", (Any,))[0]
            else:
                return Any
        except ValueError:
            return Any

    @property
    def OutputType(self) -> Any:
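InputType peels the element type out of the first parameter's Iterator[...] annotation. A standalone sketch of that trick (the function `to_str` is just an example):

```
import inspect
from typing import Any, Iterator

def to_str(chunks: Iterator[int]) -> Iterator[str]:
    for chunk in chunks:
        yield str(chunk)

params = inspect.signature(to_str).parameters
first_param = next(iter(params.values()), None)
# Iterator[int].__args__ == (int,), so this recovers `int`.
print(getattr(first_param.annotation, "__args__", (Any,))[0])  # <class 'int'>
```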
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info

langchain on master branch

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async

### Reproduction

```
from langchain.schema.runnable import RunnableLambda
import asyncio

def idchain_sync(__input):
    print(f'sync chain call: {__input}')
    return __input

async def idchain_async(__input):
    print(f'async chain call: {__input}')
    return __input

idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async)

def func(__input):
    return idchain

asyncio.run(RunnableLambda(func).ainvoke('toto'))
# prints 'sync chain call: toto' instead of 'async chain call: toto'
```

### Expected behavior

LCEL routing can cause chains to run synchronously without warning even when the user calls ainvoke. When a RunnableLambda A returns a chain B and A is called with ainvoke, we would expect chain B to be called with ainvoke as well. However, if the function provided to RunnableLambda A is not async, chain B is called with invoke instead, silently forcing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        func = getattr(self, "_transform", None) or getattr(self, "_atransform")
        try:
            sig = inspect.signature(func)
            return (
                getattr(sig.return_annotation, "__args__", (Any,))[0]
                if sig.return_annotation != inspect.Signature.empty
                else Any
            )
        except ValueError:
            return Any

    def __eq__(self, other: Any) -> bool:
        if isinstance(other, RunnableGenerator):
            if hasattr(self, "_transform") and hasattr(other, "_transform"):
                return self._transform == other._transform
            elif hasattr(self, "_atransform") and hasattr(other, "_atransform"):
                return self._atransform == other._atransform
            else:
                return False
        else:
            return False

    def __repr__(self) -> str:
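OutputType applies the same annotation trick to the return annotation, and __eq__ compares the underlying functions, so two wrappers around the same generator function compare equal. A quick sketch, assuming RunnableGenerator is importable from langchain_core.runnables (which may vary by version):

```
import inspect
from typing import Any, Iterator

from langchain_core.runnables import RunnableGenerator

def to_str(chunks: Iterator[int]) -> Iterator[str]:
    for chunk in chunks:
        yield str(chunk)

def to_upper(chunks: Iterator[str]) -> Iterator[str]:
    for chunk in chunks:
        yield chunk.upper()

sig = inspect.signature(to_str)
# Iterator[str].__args__ == (str,), so OutputType resolves to `str` here.
print(getattr(sig.return_annotation, "__args__", (Any,))[0])  # <class 'str'>

# Equality is identity on the wrapped function, not structural equality.
print(RunnableGenerator(to_str) == RunnableGenerator(to_str))    # True
print(RunnableGenerator(to_str) == RunnableGenerator(to_upper))  # False
```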
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info

langchain on master branch

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async

### Reproduction

```
from langchain.schema.runnable import RunnableLambda
import asyncio

def idchain_sync(__input):
    print(f'sync chain call: {__input}')
    return __input

async def idchain_async(__input):
    print(f'async chain call: {__input}')
    return __input

idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async)

def func(__input):
    return idchain

asyncio.run(RunnableLambda(func).ainvoke('toto'))
# prints 'sync chain call: toto' instead of 'async chain call: toto'
```

### Expected behavior

LCEL routing can cause chains to run synchronously without warning even when the user calls ainvoke. When a RunnableLambda A returns a chain B and A is called with ainvoke, we would expect chain B to be called with ainvoke as well. However, if the function provided to RunnableLambda A is not async, chain B is called with invoke instead, silently forcing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
return "RunnableGenerator(...)" def transform( self, input: Iterator[Input], config: Optional[RunnableConfig] = None, **kwargs: Any, ) -> Iterator[Output]: return self._transform_stream_with_config( input, self._transform, config, **kwargs ) def stream( self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any, ) -> Iterator[Output]: return self.transform(iter([input]), config, **kwargs) def invoke( self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any ) -> Output: final = None for output in self.stream(input, config, **kwargs): if final is None: final = output else: final = final + output return cast(Output, final) def atransform(
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info

langchain on master branch

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async

### Reproduction

```
from langchain.schema.runnable import RunnableLambda
import asyncio

def idchain_sync(__input):
    print(f'sync chain call: {__input}')
    return __input

async def idchain_async(__input):
    print(f'async chain call: {__input}')
    return __input

idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async)

def func(__input):
    return idchain

asyncio.run(RunnableLambda(func).ainvoke('toto'))
# prints 'sync chain call: toto' instead of 'async chain call: toto'
```

### Expected behavior

LCEL routing can cause chains to run synchronously without warning even when the user calls ainvoke. When a RunnableLambda A returns a chain B and A is called with ainvoke, we would expect chain B to be called with ainvoke as well. However, if the function provided to RunnableLambda A is not async, chain B is called with invoke instead, silently forcing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        input: AsyncIterator[Input],
        config: Optional[RunnableConfig] = None,
        **kwargs: Any,
    ) -> AsyncIterator[Output]:
        if not hasattr(self, "_atransform"):
            raise NotImplementedError("This runnable does not support async methods.")

        return self._atransform_stream_with_config(
            input, self._atransform, config, **kwargs
        )

    def astream(
        self,
        input: Input,
        config: Optional[RunnableConfig] = None,
        **kwargs: Any,
    ) -> AsyncIterator[Output]:
        async def input_aiter() -> AsyncIterator[Input]:
            yield input

        return self.atransform(input_aiter(), config, **kwargs)

    async def ainvoke(
        self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Any
    ) -> Output:
        final = None
        async for output in self.astream(input, config, **kwargs):
            if final is None:
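Two details of the async path above are worth calling out: astream adapts a single input into a one-item async iterator, and a sync-only RunnableGenerator refuses async use outright. A hedged sketch, with behavior assumed from the chunk rather than verified against any particular release:

```
import asyncio
from langchain_core.runnables import RunnableGenerator

def upper(chunks):  # sync generator function only; no async twin provided
    for chunk in chunks:
        yield chunk.upper()

runnable = RunnableGenerator(upper)
print(runnable.invoke("hi"))  # HI

async def main():
    # astream calls atransform, which raises because no _atransform was set.
    await runnable.ainvoke("hi")

try:
    asyncio.run(main())
except NotImplementedError as err:
    print(err)  # This runnable does not support async methods.
```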
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info

langchain on master branch

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async

### Reproduction

```
from langchain.schema.runnable import RunnableLambda
import asyncio

def idchain_sync(__input):
    print(f'sync chain call: {__input}')
    return __input

async def idchain_async(__input):
    print(f'async chain call: {__input}')
    return __input

idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async)

def func(__input):
    return idchain

asyncio.run(RunnableLambda(func).ainvoke('toto'))
# prints 'sync chain call: toto' instead of 'async chain call: toto'
```

### Expected behavior

LCEL routing can cause chains to run synchronously without warning even when the user calls ainvoke. When a RunnableLambda A returns a chain B and A is called with ainvoke, we would expect chain B to be called with ainvoke as well. However, if the function provided to RunnableLambda A is not async, chain B is called with invoke instead, silently forcing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
                final = output
            else:
                final = final + output
        return cast(Output, final)


class RunnableLambda(Runnable[Input, Output]):
    """RunnableLambda converts a python callable into a Runnable.

    Wrapping a callable in a RunnableLambda makes the callable usable
    within either a sync or async context.

    RunnableLambda can be composed as any other Runnable and provides
    seamless integration with LangChain tracing.

    Examples:

        .. code-block:: python

            # This is a RunnableLambda
            from langchain_core.runnables import RunnableLambda

            def add_one(x: int) -> int:
                return x + 1

            runnable = RunnableLambda(add_one)

            runnable.invoke(1)  # returns 2
            runnable.batch([1, 2, 3])  # returns [2, 3, 4]

            # Async is supported by default by delegating to the sync implementation
            await runnable.ainvoke(1)  # returns 2
            await runnable.abatch([1, 2, 3])  # returns [2, 3, 4]

            # Alternatively, can provide both sync and async implementations
            async def add_one_async(x: int) -> int:
                return x + 1

            runnable = RunnableLambda(add_one, afunc=add_one_async)
            runnable.invoke(1)  # Uses add_one
            await runnable.ainvoke(1)  # Uses add_one_async
    """

    def __init__(
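This class is where the reported bug lives: when the wrapped func returns another Runnable, the default async path fell back to that returned runnable's sync invoke. A hedged sketch of the fix direction follows; the helper name `ainvoke_returned` is hypothetical and the actual change landed in PR #13408, which may differ:

```
from langchain_core.runnables import Runnable

async def ainvoke_returned(output, input, config):
    # If the wrapped callable produced another Runnable, keep the call on
    # the async path instead of silently delegating to its sync invoke().
    if isinstance(output, Runnable):
        return await output.ainvoke(input, config)
    return output
```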
closed
langchain-ai/langchain
https://github.com/langchain-ai/langchain
13,407
RunnableLambda: returned runnable called synchronously when using ainvoke
### System Info

langchain on master branch

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async

### Reproduction

```
from langchain.schema.runnable import RunnableLambda
import asyncio

def idchain_sync(__input):
    print(f'sync chain call: {__input}')
    return __input

async def idchain_async(__input):
    print(f'async chain call: {__input}')
    return __input

idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async)

def func(__input):
    return idchain

asyncio.run(RunnableLambda(func).ainvoke('toto'))
# prints 'sync chain call: toto' instead of 'async chain call: toto'
```

### Expected behavior

LCEL routing can cause chains to run synchronously without warning even when the user calls ainvoke. When a RunnableLambda A returns a chain B and A is called with ainvoke, we would expect chain B to be called with ainvoke as well. However, if the function provided to RunnableLambda A is not async, chain B is called with invoke instead, silently forcing the rest of the chain to run synchronously.
https://github.com/langchain-ai/langchain/issues/13407
https://github.com/langchain-ai/langchain/pull/13408
391f200eaab9af587eea902b264f2d3af929b268
e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
"2023-11-15T17:27:49Z"
python
"2023-11-28T11:18:26Z"
libs/core/langchain_core/runnables/base.py
        self,
        func: Union[
            Union[
                Callable[[Input], Output],
                Callable[[Input, RunnableConfig], Output],
                Callable[[Input, CallbackManagerForChainRun], Output],
                Callable[[Input, CallbackManagerForChainRun, RunnableConfig], Output],
            ],
            Union[
                Callable[[Input], Awaitable[Output]],
                Callable[[Input, RunnableConfig], Awaitable[Output]],
                Callable[[Input, AsyncCallbackManagerForChainRun], Awaitable[Output]],
                Callable[
                    [Input, AsyncCallbackManagerForChainRun, RunnableConfig],
                    Awaitable[Output],
                ],
            ],
        ],
        afunc: Optional[
            Union[
                Callable[[Input], Awaitable[Output]],
                Callable[[Input, RunnableConfig], Awaitable[Output]],
                Callable[[Input, AsyncCallbackManagerForChainRun], Awaitable[Output]],
                Callable[
                    [Input, AsyncCallbackManagerForChainRun, RunnableConfig],
                    Awaitable[Output],