Dataset schema (one row per issue/chunk pair):

| Column | Type | Values / range |
|---|---|---|
| status | stringclasses | 1 value |
| repo_name | stringclasses | 31 values |
| repo_url | stringclasses | 31 values |
| issue_id | int64 | 1 to 104k |
| title | stringlengths | 4 to 233 |
| body | stringlengths | 0 to 186k, nullable (⌀) |
| issue_url | stringlengths | 38 to 56 |
| pull_url | stringlengths | 37 to 54 |
| before_fix_sha | stringlengths | 40 |
| after_fix_sha | stringlengths | 40 |
| report_datetime | unknown | |
| language | stringclasses | 5 values |
| commit_datetime | unknown | |
| updated_file | stringlengths | 7 to 188 |
| chunk_content | stringlengths | 1 to 1.03M |
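For orientation, a minimal sketch of how rows with this schema might be consumed. The dataset identifier below is a placeholder assumption; only the column names come from the schema above.

```python
from datasets import load_dataset  # any row iterator over this schema works the same way

# "user/dataset-name" is a hypothetical id; substitute the real one.
rows = load_dataset("user/dataset-name", split="train")

for row in rows:
    # Each row pairs one GitHub issue with one chunk of the file touched by its fix.
    print(row["repo_name"], row["issue_id"], row["updated_file"])
    print(row["chunk_content"][:80])
```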
- **status:** closed
- **repo_name:** langchain-ai/langchain
- **repo_url:** https://github.com/langchain-ai/langchain
- **issue_id:** 13407
- **title:** RunnableLambda: returned runnable called synchronously when using ainvoke

**body:**

### System Info

langchain on master branch

### Who can help?

_No response_

### Information

- [ ] The official example notebooks/scripts
- [X] My own modified scripts

### Related Components

- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async

### Reproduction

```python
from langchain.schema.runnable import RunnableLambda
import asyncio


def idchain_sync(__input):
    print(f'sync chain call: {__input}')
    return __input


async def idchain_async(__input):
    print(f'async chain call: {__input}')
    return __input


idchain = RunnableLambda(func=idchain_sync, afunc=idchain_async)


def func(__input):
    return idchain


asyncio.run(RunnableLambda(func).ainvoke('toto'))
# prints 'sync chain call: toto' instead of 'async chain call: toto'
```

### Expected behavior

LCEL routing can silently cause chains to run synchronously even though the user called `ainvoke`.
When a `RunnableLambda` A returns a chain B and A is called with `ainvoke`, we would expect B to be called with `ainvoke` as well.
However, if the function provided to `RunnableLambda` A is not async, then B is called with `invoke`, silently causing the rest of the chain to run synchronously.
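The linked PR fixes this inside `RunnableLambda` itself. As a workaround sketch on the reporter's reproduction (my suggestion, not something stated in the issue): give the outer lambda an async counterpart as well, so `ainvoke` stays on the async code path end to end.

```python
# Hedged workaround: mirror idchain by supplying afunc for the outer lambda too.
async def afunc(__input):
    return idchain

asyncio.run(RunnableLambda(func=func, afunc=afunc).ainvoke('toto'))
# expected to print 'async chain call: toto'
```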
- **issue_url:** https://github.com/langchain-ai/langchain/issues/13407
- **pull_url:** https://github.com/langchain-ai/langchain/pull/13408
- **before_fix_sha:** 391f200eaab9af587eea902b264f2d3af929b268
- **after_fix_sha:** e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5
- **report_datetime:** 2023-11-15T17:27:49Z
- **language:** python
- **commit_datetime:** 2023-11-28T11:18:26Z
- **updated_file:** libs/core/tests/unit_tests/runnables/test_runnable.py

**chunk_content** (a contiguous excerpt of the updated file; it opens mid-assertion):

```python
        {"question": "What is your name?"}, dict(callbacks=[tracer])
    ) == ["baz", "qux"]

    if sys.version_info >= (3, 9):
        assert tracer.runs == snapshot


@freeze_time("2023-01-01")
def test_seq_dict_prompt_llm(
    mocker: MockerFixture, snapshot: SnapshotAssertion
) -> None:
    passthrough = mocker.Mock(side_effect=lambda x: x)
    retriever = FakeRetriever()
    prompt = (
        SystemMessagePromptTemplate.from_template("You are a nice assistant.")
        + """Context:
{documents}

Question:
{question}"""
    )
    chat = FakeListChatModel(responses=["foo, bar"])
    parser = CommaSeparatedListOutputParser()
    chain: Runnable = (
        {
            "question": RunnablePassthrough[str]() | passthrough,
            "documents": passthrough | retriever,
            "just_to_test_lambda": passthrough,
        }
        | prompt
        | chat
        | parser
    )
    assert repr(chain) == snapshot
    assert isinstance(chain, RunnableSequence)
    assert isinstance(chain.first, RunnableParallel)
    assert chain.middle == [prompt, chat]
    assert chain.last == parser
    assert dumps(chain, pretty=True) == snapshot

    prompt_spy = mocker.spy(prompt.__class__, "invoke")
    chat_spy = mocker.spy(chat.__class__, "invoke")
    parser_spy = mocker.spy(parser.__class__, "invoke")
    tracer = FakeTracer()
    assert chain.invoke("What is your name?", dict(callbacks=[tracer])) == [
        "foo",
        "bar",
    ]
    assert prompt_spy.call_args.args[1] == {
        "documents": [Document(page_content="foo"), Document(page_content="bar")],
        "question": "What is your name?",
        "just_to_test_lambda": "What is your name?",
    }
    assert chat_spy.call_args.args[1] == ChatPromptValue(
        messages=[
            SystemMessage(content="You are a nice assistant."),
            HumanMessage(
                content="""Context:
[Document(page_content='foo'), Document(page_content='bar')]

Question:
What is your name?"""
            ),
        ]
    )
    assert parser_spy.call_args.args[1] == AIMessage(content="foo, bar")

    assert len([r for r in tracer.runs if r.parent_run_id is None]) == 1
    parent_run = next(r for r in tracer.runs if r.parent_run_id is None)
    assert len(parent_run.child_runs) == 4
    map_run = parent_run.child_runs[0]
    assert map_run.name == "RunnableParallel"
    assert len(map_run.child_runs) == 3


@freeze_time("2023-01-01")
def test_seq_prompt_dict(mocker: MockerFixture, snapshot: SnapshotAssertion) -> None:
    passthrough = mocker.Mock(side_effect=lambda x: x)
    prompt = (
        SystemMessagePromptTemplate.from_template("You are a nice assistant.")
        + "{question}"
    )
    chat = FakeListChatModel(responses=["i'm a chatbot"])
    llm = FakeListLLM(responses=["i'm a textbot"])
    chain = (
        prompt
        | passthrough
        | {
            "chat": chat,
            "llm": llm,
        }
    )
    assert repr(chain) == snapshot
    assert isinstance(chain, RunnableSequence)
    assert chain.first == prompt
    assert chain.middle == [RunnableLambda(passthrough)]
    assert isinstance(chain.last, RunnableParallel)
    assert dumps(chain, pretty=True) == snapshot
    prompt_spy = mocker.spy(prompt.__class__, "invoke")
    chat_spy = mocker.spy(chat.__class__, "invoke")
    llm_spy = mocker.spy(llm.__class__, "invoke")
    tracer = FakeTracer()
    assert chain.invoke(
        {"question": "What is your name?"}, dict(callbacks=[tracer])
    ) == {
        "chat": AIMessage(content="i'm a chatbot"),
        "llm": "i'm a textbot",
    }
    assert prompt_spy.call_args.args[1] == {"question": "What is your name?"}
    assert chat_spy.call_args.args[1] == ChatPromptValue(
        messages=[
            SystemMessage(content="You are a nice assistant."),
            HumanMessage(content="What is your name?"),
        ]
    )
    assert llm_spy.call_args.args[1] == ChatPromptValue(
        messages=[
            SystemMessage(content="You are a nice assistant."),
            HumanMessage(content="What is your name?"),
        ]
    )

    assert len([r for r in tracer.runs if r.parent_run_id is None]) == 1
    parent_run = next(r for r in tracer.runs if r.parent_run_id is None)
    assert len(parent_run.child_runs) == 3
    map_run = parent_run.child_runs[2]
    assert map_run.name == "RunnableParallel"
    assert len(map_run.child_runs) == 2

@freeze_time("2023-01-01")
async def test_router_runnable(
    mocker: MockerFixture, snapshot: SnapshotAssertion
) -> None:
    chain1: Runnable = ChatPromptTemplate.from_template(
        "You are a math genius. Answer the question: {question}"
    ) | FakeListLLM(responses=["4"])
    chain2: Runnable = ChatPromptTemplate.from_template(
        "You are an english major. Answer the question: {question}"
    ) | FakeListLLM(responses=["2"])
    router: Runnable = RouterRunnable({"math": chain1, "english": chain2})
    chain: Runnable = {
        "key": lambda x: x["key"],
        "input": {"question": lambda x: x["question"]},
    } | router

    assert dumps(chain, pretty=True) == snapshot

    result = chain.invoke({"key": "math", "question": "2 + 2"})
    assert result == "4"

    result2 = chain.batch(
        [{"key": "math", "question": "2 + 2"}, {"key": "english", "question": "2 + 2"}]
    )
    assert result2 == ["4", "2"]

    result = await chain.ainvoke({"key": "math", "question": "2 + 2"})
    assert result == "4"

    result2 = await chain.abatch(
        [{"key": "math", "question": "2 + 2"}, {"key": "english", "question": "2 + 2"}]
    )
    assert result2 == ["4", "2"]

    router_spy = mocker.spy(router.__class__, "invoke")
    tracer = FakeTracer()
    assert (
        chain.invoke({"key": "math", "question": "2 + 2"}, dict(callbacks=[tracer]))
        == "4"
    )
    assert router_spy.call_args.args[1] == {
        "key": "math",
        "input": {"question": "2 + 2"},
    }

    assert len([r for r in tracer.runs if r.parent_run_id is None]) == 1
    parent_run = next(r for r in tracer.runs if r.parent_run_id is None)
    assert len(parent_run.child_runs) == 2
    router_run = parent_run.child_runs[1]
    assert router_run.name == "RunnableSequence"
    assert len(router_run.child_runs) == 2


@freeze_time("2023-01-01")
async def test_higher_order_lambda_runnable(
    mocker: MockerFixture, snapshot: SnapshotAssertion
) -> None:
    math_chain: Runnable = ChatPromptTemplate.from_template(
        "You are a math genius. Answer the question: {question}"
    ) | FakeListLLM(responses=["4"])
    english_chain: Runnable = ChatPromptTemplate.from_template(
        "You are an english major. Answer the question: {question}"
    ) | FakeListLLM(responses=["2"])
    input_map: Runnable = RunnableParallel(
        key=lambda x: x["key"],
        input={"question": lambda x: x["question"]},
    )

    def router(input: Dict[str, Any]) -> Runnable:
        if input["key"] == "math":
            return itemgetter("input") | math_chain
        elif input["key"] == "english":
            return itemgetter("input") | english_chain
        else:
            raise ValueError(f"Unknown key: {input['key']}")

    chain: Runnable = input_map | router

    if sys.version_info >= (3, 9):
        assert dumps(chain, pretty=True) == snapshot

    result = chain.invoke({"key": "math", "question": "2 + 2"})
    assert result == "4"

    result2 = chain.batch(
        [{"key": "math", "question": "2 + 2"}, {"key": "english", "question": "2 + 2"}]
    )
    assert result2 == ["4", "2"]

    result = await chain.ainvoke({"key": "math", "question": "2 + 2"})
    assert result == "4"

    result2 = await chain.abatch(
        [{"key": "math", "question": "2 + 2"}, {"key": "english", "question": "2 + 2"}]
    )
    assert result2 == ["4", "2"]

    math_spy = mocker.spy(math_chain.__class__, "invoke")
    tracer = FakeTracer()
    assert (
        chain.invoke({"key": "math", "question": "2 + 2"}, dict(callbacks=[tracer]))
        == "4"
    )
    assert math_spy.call_args.args[1] == {
        "key": "math",
        "input": {"question": "2 + 2"},
    }

    assert len([r for r in tracer.runs if r.parent_run_id is None]) == 1
    parent_run = next(r for r in tracer.runs if r.parent_run_id is None)
    assert len(parent_run.child_runs) == 2
    router_run = parent_run.child_runs[1]
    assert router_run.name == "router"
    assert len(router_run.child_runs) == 1
    math_run = router_run.child_runs[0]
    assert math_run.name == "RunnableSequence"
    assert len(math_run.child_runs) == 3

    async def arouter(input: Dict[str, Any]) -> Runnable:
        if input["key"] == "math":
            return itemgetter("input") | math_chain
        elif input["key"] == "english":
            return itemgetter("input") | english_chain
        else:
            raise ValueError(f"Unknown key: {input['key']}")

    achain: Runnable = input_map | arouter

    math_spy = mocker.spy(math_chain.__class__, "ainvoke")
    tracer = FakeTracer()
    assert (
        await achain.ainvoke(
            {"key": "math", "question": "2 + 2"}, dict(callbacks=[tracer])
        )
        == "4"
    )
    assert math_spy.call_args.args[1] == {
        "key": "math",
        "input": {"question": "2 + 2"},
    }

    assert len([r for r in tracer.runs if r.parent_run_id is None]) == 1
    parent_run = next(r for r in tracer.runs if r.parent_run_id is None)
    assert len(parent_run.child_runs) == 2
    router_run = parent_run.child_runs[1]
    assert router_run.name == "arouter"
    assert len(router_run.child_runs) == 1
    math_run = router_run.child_runs[0]
    assert math_run.name == "RunnableSequence"
    assert len(math_run.child_runs) == 3
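# Reader note: test_higher_order_lambda_runnable above exercises the scenario this
# issue describes: a lambda ("router"/"arouter") that returns another runnable.
# The "arouter" variant spies on math_chain's "ainvoke", so it only passes when the
# returned chain is reached through the async path instead of a plain invoke().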

@freeze_time("2023-01-01")
def test_seq_prompt_map(mocker: MockerFixture, snapshot: SnapshotAssertion) -> None:
    passthrough = mocker.Mock(side_effect=lambda x: x)
    prompt = (
        SystemMessagePromptTemplate.from_template("You are a nice assistant.")
        + "{question}"
    )
    chat = FakeListChatModel(responses=["i'm a chatbot"])
    llm = FakeListLLM(responses=["i'm a textbot"])
    chain = (
        prompt
        | passthrough
        | {
            "chat": chat.bind(stop=["Thought:"]),
            "llm": llm,
            "passthrough": passthrough,
        }
    )
    assert isinstance(chain, RunnableSequence)
    assert chain.first == prompt
    assert chain.middle == [RunnableLambda(passthrough)]
    assert isinstance(chain.last, RunnableParallel)
    assert dumps(chain, pretty=True) == snapshot

    prompt_spy = mocker.spy(prompt.__class__, "invoke")
    chat_spy = mocker.spy(chat.__class__, "invoke")
    llm_spy = mocker.spy(llm.__class__, "invoke")
    tracer = FakeTracer()
    assert chain.invoke(
        {"question": "What is your name?"}, dict(callbacks=[tracer])
    ) == {
        "chat": AIMessage(content="i'm a chatbot"),
        "llm": "i'm a textbot",
        "passthrough": ChatPromptValue(
            messages=[
                SystemMessage(content="You are a nice assistant."),
                HumanMessage(content="What is your name?"),
            ]
        ),
    }
    assert prompt_spy.call_args.args[1] == {"question": "What is your name?"}
    assert chat_spy.call_args.args[1] == ChatPromptValue(
        messages=[
            SystemMessage(content="You are a nice assistant."),
            HumanMessage(content="What is your name?"),
        ]
    )
    assert llm_spy.call_args.args[1] == ChatPromptValue(
        messages=[
            SystemMessage(content="You are a nice assistant."),
            HumanMessage(content="What is your name?"),
        ]
    )

    assert len([r for r in tracer.runs if r.parent_run_id is None]) == 1
    parent_run = next(r for r in tracer.runs if r.parent_run_id is None)
    assert len(parent_run.child_runs) == 3
    map_run = parent_run.child_runs[2]
    assert map_run.name == "RunnableParallel"
    assert len(map_run.child_runs) == 3


def test_map_stream() -> None:
    prompt = (
        SystemMessagePromptTemplate.from_template("You are a nice assistant.")
        + "{question}"
    )
    chat_res = "i'm a chatbot"
    chat = FakeListChatModel(responses=[chat_res], sleep=0.01)
    llm_res = "i'm a textbot"
    llm = FakeStreamingListLLM(responses=[llm_res], sleep=0.01)

    chain: Runnable = prompt | {
        "chat": chat.bind(stop=["Thought:"]),
        "llm": llm,
        "passthrough": RunnablePassthrough(),
    }

    stream = chain.stream({"question": "What is your name?"})

    final_value = None
    streamed_chunks = []
    for chunk in stream:
        streamed_chunks.append(chunk)
        if final_value is None:
            final_value = chunk
        else:
            final_value += chunk

    assert streamed_chunks[0] in [
        {"passthrough": prompt.invoke({"question": "What is your name?"})},
        {"llm": "i"},
        {"chat": AIMessageChunk(content="i")},
    ]
    assert len(streamed_chunks) == len(chat_res) + len(llm_res) + 1
    assert all(len(c.keys()) == 1 for c in streamed_chunks)
    assert final_value is not None
    assert final_value.get("chat").content == "i'm a chatbot"
    assert final_value.get("llm") == "i'm a textbot"
    assert final_value.get("passthrough") == prompt.invoke(
        {"question": "What is your name?"}
    )


def test_map_stream_iterator_input() -> None:
    prompt = (
        SystemMessagePromptTemplate.from_template("You are a nice assistant.")
        + "{question}"
    )
    chat_res = "i'm a chatbot"
    chat = FakeListChatModel(responses=[chat_res], sleep=0.01)
    llm_res = "i'm a textbot"
    llm = FakeStreamingListLLM(responses=[llm_res], sleep=0.01)
    chain: Runnable = (
        prompt
        | llm
        | {
            "chat": chat.bind(stop=["Thought:"]),
            "llm": llm,
            "passthrough": RunnablePassthrough(),
        }
    )

    stream = chain.stream({"question": "What is your name?"})

    final_value = None
    streamed_chunks = []
    for chunk in stream:
        streamed_chunks.append(chunk)
        if final_value is None:
            final_value = chunk
        else:
            final_value += chunk

    assert streamed_chunks[0] in [
        {"passthrough": "i"},
        {"llm": "i"},
        {"chat": AIMessageChunk(content="i")},
    ]
    assert len(streamed_chunks) == len(chat_res) + len(llm_res) + len(llm_res)
    assert all(len(c.keys()) == 1 for c in streamed_chunks)
    assert final_value is not None
    assert final_value.get("chat").content == "i'm a chatbot"
    assert final_value.get("llm") == "i'm a textbot"
    assert final_value.get("passthrough") == "i'm a textbot"


async def test_map_astream() -> None:
    prompt = (
        SystemMessagePromptTemplate.from_template("You are a nice assistant.")
        + "{question}"
    )
    chat_res = "i'm a chatbot"
    chat = FakeListChatModel(responses=[chat_res], sleep=0.01)
    llm_res = "i'm a textbot"
    llm = FakeStreamingListLLM(responses=[llm_res], sleep=0.01)

    chain: Runnable = prompt | {
        "chat": chat.bind(stop=["Thought:"]),
        "llm": llm,
        "passthrough": RunnablePassthrough(),
    }

    stream = chain.astream({"question": "What is your name?"})

    final_value = None
    streamed_chunks = []
    async for chunk in stream:
        streamed_chunks.append(chunk)
        if final_value is None:
            final_value = chunk
        else:
            final_value += chunk

    assert streamed_chunks[0] in [
        {"passthrough": prompt.invoke({"question": "What is your name?"})},
        {"llm": "i"},
        {"chat": AIMessageChunk(content="i")},
    ]
    assert len(streamed_chunks) == len(chat_res) + len(llm_res) + 1
    assert all(len(c.keys()) == 1 for c in streamed_chunks)
    assert final_value is not None
    assert final_value.get("chat").content == "i'm a chatbot"
    assert final_value.get("llm") == "i'm a textbot"
    assert final_value.get("passthrough") == prompt.invoke(
        {"question": "What is your name?"}
    )

    final_state = None
    streamed_ops = []
    async for chunk in chain.astream_log({"question": "What is your name?"}):
        streamed_ops.extend(chunk.ops)
        if final_state is None:
            final_state = chunk
        else:
            final_state += chunk
    final_state = cast(RunLog, final_state)

    assert final_state.state["final_output"] == final_value
    assert len(final_state.state["streamed_output"]) == len(streamed_chunks)
    assert isinstance(final_state.state["id"], str)
    assert len(final_state.ops) == len(streamed_ops)
    assert len(final_state.state["logs"]) == 5
    assert (
        final_state.state["logs"]["ChatPromptTemplate"]["name"] == "ChatPromptTemplate"
    )
    assert final_state.state["logs"]["ChatPromptTemplate"][
        "final_output"
    ] == prompt.invoke({"question": "What is your name?"})
    assert final_state.state["logs"]["RunnableParallel"]["name"] == "RunnableParallel"
    assert sorted(final_state.state["logs"]) == [
        "ChatPromptTemplate",
        "FakeListChatModel",
        "FakeStreamingListLLM",
        "RunnableParallel",
        "RunnablePassthrough",
    ]

    final_state = None
    async for chunk in chain.astream_log(
        {"question": "What is your name?"}, include_names=["FakeListChatModel"]
    ):
        if final_state is None:
            final_state = chunk
        else:
            final_state += chunk
    final_state = cast(RunLog, final_state)

    assert final_state.state["final_output"] == final_value
    assert len(final_state.state["streamed_output"]) == len(streamed_chunks)
    assert len(final_state.state["logs"]) == 1
    assert final_state.state["logs"]["FakeListChatModel"]["name"] == "FakeListChatModel"

    final_state = None
    async for chunk in chain.astream_log(
        {"question": "What is your name?"}, exclude_names=["FakeListChatModel"]
    ):
        if final_state is None:
            final_state = chunk
        else:
            final_state += chunk
    final_state = cast(RunLog, final_state)

    assert final_state.state["final_output"] == final_value
    assert len(final_state.state["streamed_output"]) == len(streamed_chunks)
    assert len(final_state.state["logs"]) == 4
    assert (
        final_state.state["logs"]["ChatPromptTemplate"]["name"] == "ChatPromptTemplate"
    )
    assert final_state.state["logs"]["ChatPromptTemplate"]["final_output"] == (
        prompt.invoke({"question": "What is your name?"})
    )
    assert final_state.state["logs"]["RunnableParallel"]["name"] == "RunnableParallel"
    assert sorted(final_state.state["logs"]) == [
        "ChatPromptTemplate",
        "FakeStreamingListLLM",
        "RunnableParallel",
        "RunnablePassthrough",
    ]


async def test_map_astream_iterator_input() -> None:
    prompt = (
        SystemMessagePromptTemplate.from_template("You are a nice assistant.")
        + "{question}"
    )
    chat_res = "i'm a chatbot"
    chat = FakeListChatModel(responses=[chat_res], sleep=0.01)
    llm_res = "i'm a textbot"
    llm = FakeStreamingListLLM(responses=[llm_res], sleep=0.01)
    chain: Runnable = (
        prompt
        | llm
        | {
            "chat": chat.bind(stop=["Thought:"]),
            "llm": llm,
            "passthrough": RunnablePassthrough(),
        }
    )

    stream = chain.astream({"question": "What is your name?"})

    final_value = None
    streamed_chunks = []
    async for chunk in stream:
        streamed_chunks.append(chunk)
        if final_value is None:
            final_value = chunk
        else:
            final_value += chunk

    assert streamed_chunks[0] in [
        {"passthrough": "i"},
        {"llm": "i"},
        {"chat": AIMessageChunk(content="i")},
    ]
    assert len(streamed_chunks) == len(chat_res) + len(llm_res) + len(llm_res)
    assert all(len(c.keys()) == 1 for c in streamed_chunks)
    assert final_value is not None
    assert final_value.get("chat").content == "i'm a chatbot"
    assert final_value.get("llm") == "i'm a textbot"
    assert final_value.get("passthrough") == llm_res


def test_with_config_with_config() -> None:
    llm = FakeListLLM(responses=["i'm a textbot"])

    assert dumpd(
        llm.with_config({"metadata": {"a": "b"}}).with_config(tags=["a-tag"])
    ) == dumpd(llm.with_config({"metadata": {"a": "b"}, "tags": ["a-tag"]}))


def test_metadata_is_merged() -> None:
    """Test that metadata defined in with_config and at invocation time is merged."""
    foo = RunnableLambda(lambda x: x).with_config({"metadata": {"my_key": "my_value"}})
    expected_metadata = {
        "my_key": "my_value",
        "my_other_key": "my_other_value",
    }
    with collect_runs() as cb:
        foo.invoke("hi", {"metadata": {"my_other_key": "my_other_value"}})
        run = cb.traced_runs[0]
    assert run.extra is not None
    assert run.extra["metadata"] == expected_metadata


def test_tags_are_appended() -> None:
    """Test that tags from with_config are concatenated with those in the invocation."""
    foo = RunnableLambda(lambda x: x).with_config({"tags": ["my_key"]})
    with collect_runs() as cb:
        foo.invoke("hi", {"tags": ["invoked_key"]})
        run = cb.traced_runs[0]
    assert isinstance(run.tags, list)
    assert sorted(run.tags) == sorted(["my_key", "invoked_key"])


def test_bind_bind() -> None:
| https://github.com/langchain-ai/langchain/issues/13407 | https://github.com/langchain-ai/langchain/pull/13408 | 391f200eaab9af587eea902b264f2d3af929b268 | e17edc4d0b1dccc98df2a7d1db86bfcc0eb66ca5 | "2023-11-15T17:27:49Z" | python | "2023-11-28T11:18:26Z" | libs/core/tests/unit_tests/runnables/test_runnable.py | llm = FakeListLLM(responses=["i'm a textbot"])
assert dumpd(
llm.bind(stop=["Thought:"], one="two").bind(
stop=["Observation:"], hello="world"
)
) == dumpd(llm.bind(stop=["Observation:"], one="two", hello="world"))
def test_deep_stream() -> None:
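    # Streaming a prompt | llm | parser chain should yield one chunk per output character.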
    prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])
chain = prompt | llm | StrOutputParser()
stream = chain.stream({"question": "What up"})
chunks = []
for chunk in stream:
chunks.append(chunk)
assert len(chunks) == len("foo-lish")
assert "".join(chunks) == "foo-lish"
chunks = []
for chunk in (chain | RunnablePassthrough()).stream({"question": "What up"}):
chunks.append(chunk)
assert len(chunks) == len("foo-lish")
assert "".join(chunks) == "foo-lish"
def test_deep_stream_assign() -> None:
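    # RunnablePassthrough.assign should stream the existing keys first, then the assigned key.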
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])
    chain: Runnable = prompt | llm | {"str": StrOutputParser()}
    stream = chain.stream({"question": "What up"})
chunks = []
for chunk in stream:
chunks.append(chunk)
assert len(chunks) == len("foo-lish")
assert add(chunks) == {"str": "foo-lish"}
chain_with_assign = chain | RunnablePassthrough.assign(
hello=itemgetter("str") | llm
)
assert chain_with_assign.input_schema.schema() == {
"title": "PromptInput",
"type": "object",
"properties": {"question": {"title": "Question", "type": "string"}},
}
assert chain_with_assign.output_schema.schema() == {
"title": "RunnableAssignOutput",
"type": "object",
"properties": {
"str": {"title": "Str"},
"hello": {"title": "Hello", "type": "string"},
},
}
chunks = []
for chunk in chain_with_assign.stream({"question": "What up"}):
chunks.append(chunk)
assert len(chunks) == len("foo-lish") * 2
    assert chunks == [
        {"str": "f"},
        {"str": "o"},
        {"str": "o"},
{"str": "-"},
{"str": "l"},
{"str": "i"},
{"str": "s"},
{"str": "h"},
{"hello": "f"},
{"hello": "o"},
{"hello": "o"},
{"hello": "-"},
{"hello": "l"},
{"hello": "i"},
{"hello": "s"},
{"hello": "h"},
]
assert add(chunks) == {"str": "foo-lish", "hello": "foo-lish"}
assert chain_with_assign.invoke({"question": "What up"}) == {
"str": "foo-lish",
"hello": "foo-lish",
}
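    # A key set via .assign shadows the same key produced by the wrapped chain.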
chain_with_assign_shadow = chain | RunnablePassthrough.assign(
str=lambda _: "shadow",
hello=itemgetter("str") | llm,
)
assert chain_with_assign_shadow.input_schema.schema() == {
"title": "PromptInput",
"type": "object",
"properties": {"question": {"title": "Question", "type": "string"}},
    }
    assert chain_with_assign_shadow.output_schema.schema() == {
"title": "RunnableAssignOutput",
"type": "object",
"properties": {
"str": {"title": "Str"},
"hello": {"title": "Hello", "type": "string"},
},
}
chunks = []
for chunk in chain_with_assign_shadow.stream({"question": "What up"}):
chunks.append(chunk)
assert len(chunks) == len("foo-lish") + 1
assert add(chunks) == {"str": "shadow", "hello": "foo-lish"}
assert chain_with_assign_shadow.invoke({"question": "What up"}) == {
"str": "shadow",
"hello": "foo-lish",
}
async def test_deep_astream() -> None:
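    # Async counterpart of test_deep_stream: astream should also yield per-character chunks.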
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])
chain = prompt | llm | StrOutputParser()
stream = chain.astream({"question": "What up"})
chunks = []
async for chunk in stream:
chunks.append(chunk)
assert len(chunks) == len("foo-lish")
    assert "".join(chunks) == "foo-lish"
    chunks = []
async for chunk in (chain | RunnablePassthrough()).astream({"question": "What up"}):
chunks.append(chunk)
assert len(chunks) == len("foo-lish")
assert "".join(chunks) == "foo-lish"
async def test_deep_astream_assign() -> None:
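    # Async counterpart of test_deep_stream_assign.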
prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = prompt | llm | {"str": StrOutputParser()}
stream = chain.astream({"question": "What up"})
chunks = []
async for chunk in stream:
chunks.append(chunk)
assert len(chunks) == len("foo-lish")
assert add(chunks) == {"str": "foo-lish"}
chain_with_assign = chain | RunnablePassthrough.assign(
hello=itemgetter("str") | llm,
)
assert chain_with_assign.input_schema.schema() == {
"title": "PromptInput",
"type": "object",
"properties": {"question": {"title": "Question", "type": "string"}},
}
    assert chain_with_assign.output_schema.schema() == {
        "title": "RunnableAssignOutput",
        "type": "object",
        "properties": {
            "str": {"title": "Str"},
"hello": {"title": "Hello", "type": "string"},
},
}
chunks = []
async for chunk in chain_with_assign.astream({"question": "What up"}):
chunks.append(chunk)
assert len(chunks) == len("foo-lish") * 2
assert chunks == [
{"str": "f"},
{"str": "o"},
{"str": "o"},
{"str": "-"},
{"str": "l"},
{"str": "i"},
{"str": "s"},
{"str": "h"},
{"hello": "f"},
{"hello": "o"},
{"hello": "o"},
{"hello": "-"},
{"hello": "l"},
{"hello": "i"},
{"hello": "s"},
{"hello": "h"},
]
assert add(chunks) == {"str": "foo-lish", "hello": "foo-lish"}
    assert await chain_with_assign.ainvoke({"question": "What up"}) == {
        "str": "foo-lish",
"hello": "foo-lish",
}
chain_with_assign_shadow = chain | RunnablePassthrough.assign(
str=lambda _: "shadow",
hello=itemgetter("str") | llm,
)
assert chain_with_assign_shadow.input_schema.schema() == {
"title": "PromptInput",
"type": "object",
"properties": {"question": {"title": "Question", "type": "string"}},
}
assert chain_with_assign_shadow.output_schema.schema() == {
"title": "RunnableAssignOutput",
"type": "object",
"properties": {
"str": {"title": "Str"},
"hello": {"title": "Hello", "type": "string"},
},
}
chunks = []
async for chunk in chain_with_assign_shadow.astream({"question": "What up"}):
chunks.append(chunk)
assert len(chunks) == len("foo-lish") + 1
assert add(chunks) == {"str": "shadow", "hello": "foo-lish"}
assert await chain_with_assign_shadow.ainvoke({"question": "What up"}) == {
"str": "shadow",
"hello": "foo-lish",
}
def test_runnable_sequence_transform() -> None:
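    # .transform consumes an input stream and re-emits the transformed output as a stream.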
    llm = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = llm | StrOutputParser()
stream = chain.transform(llm.stream("Hi there!"))
chunks = []
for chunk in stream:
chunks.append(chunk)
assert len(chunks) == len("foo-lish")
assert "".join(chunks) == "foo-lish"
async def test_runnable_sequence_atransform() -> None:
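    # Async variant: .atransform streams transformed chunks from an async iterator.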
llm = FakeStreamingListLLM(responses=["foo-lish"])
chain: Runnable = llm | StrOutputParser()
stream = chain.atransform(llm.astream("Hi there!"))
chunks = []
async for chunk in stream:
chunks.append(chunk)
assert len(chunks) == len("foo-lish")
assert "".join(chunks) == "foo-lish"
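# Fixtures below pair a runnable constructed to fail with one that succeeds, to exercise fallbacks.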
@pytest.fixture()
def llm_with_fallbacks() -> RunnableWithFallbacks:
error_llm = FakeListLLM(responses=["foo"], i=1)
pass_llm = FakeListLLM(responses=["bar"])
return error_llm.with_fallbacks([pass_llm])
@pytest.fixture()
def llm_with_multi_fallbacks() -> RunnableWithFallbacks:
    error_llm = FakeListLLM(responses=["foo"], i=1)
error_llm_2 = FakeListLLM(responses=["baz"], i=1)
pass_llm = FakeListLLM(responses=["bar"])
return error_llm.with_fallbacks([error_llm_2, pass_llm])
@pytest.fixture()
def llm_chain_with_fallbacks() -> Runnable:
error_llm = FakeListLLM(responses=["foo"], i=1)
pass_llm = FakeListLLM(responses=["bar"])
prompt = PromptTemplate.from_template("what did baz say to {buz}")
return RunnableParallel({"buz": lambda x: x}) | (prompt | error_llm).with_fallbacks(
[prompt | pass_llm]
)
@pytest.mark.parametrize(
"runnable",
["llm_with_fallbacks", "llm_with_multi_fallbacks", "llm_chain_with_fallbacks"],
)
async def test_llm_with_fallbacks(
runnable: RunnableWithFallbacks, request: Any, snapshot: SnapshotAssertion
) -> None:
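    # The primary runnable errors out, so every sync and async API should return the fallback's "bar".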
runnable = request.getfixturevalue(runnable)
assert runnable.invoke("hello") == "bar"
assert runnable.batch(["hi", "hey", "bye"]) == ["bar"] * 3
assert list(runnable.stream("hello")) == ["bar"]
assert await runnable.ainvoke("hello") == "bar"
assert await runnable.abatch(["hi", "hey", "bye"]) == ["bar"] * 3
assert list(await runnable.ainvoke("hello")) == list("bar")
if sys.version_info >= (3, 9):
assert dumps(runnable, pretty=True) == snapshot
class FakeSplitIntoListParser(BaseOutputParser[List[str]]):
    """Parse the output of an LLM call to a comma-separated list."""
@classmethod
def is_lc_serializable(cls) -> bool:
"""Return whether or not the class is serializable."""
return True
def get_format_instructions(self) -> str:
return (
"Your response should be a list of comma separated values, "
"eg: `foo, bar, baz`"
)
def parse(self, text: str) -> List[str]:
"""Parse the output of an LLM call."""
return text.strip().split(", ")
def test_each_simple() -> None:
"""Test that each() works with a simple runnable."""
parser = FakeSplitIntoListParser()
assert parser.invoke("first item, second item") == ["first item", "second item"]
assert parser.map().invoke(["a, b", "c"]) == [["a", "b"], ["c"]]
assert parser.map().map().invoke([["a, b", "c"], ["c, e"]]) == [
[["a", "b"], ["c"]],
[["c", "e"]],
]
def test_each(snapshot: SnapshotAssertion) -> None:
    prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
first_llm = FakeStreamingListLLM(responses=["first item, second item, third item"])
parser = FakeSplitIntoListParser()
second_llm = FakeStreamingListLLM(responses=["this", "is", "a", "test"])
chain = prompt | first_llm | parser | second_llm.map()
assert dumps(chain, pretty=True) == snapshot
output = chain.invoke({"question": "What up"})
assert output == ["this", "is", "a"]
assert (parser | second_llm.map()).invoke("first item, second item") == [
"test",
"this",
]
def test_recursive_lambda() -> None:
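    # A lambda returning another Runnable recurses until a value, or the recursion limit, is reached.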
def _simple_recursion(x: int) -> Union[int, Runnable]:
if x < 10:
return RunnableLambda(lambda *args: _simple_recursion(x + 1))
else:
return x
runnable = RunnableLambda(_simple_recursion)
assert runnable.invoke(5) == 10
with pytest.raises(RecursionError):
runnable.invoke(0, {"recursion_limit": 9})
def test_retrying(mocker: MockerFixture) -> None:
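    # with_retry should retry only the listed exception types, up to stop_after_attempt attempts.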
    def _lambda(x: int) -> Union[int, Runnable]:
if x == 1:
raise ValueError("x is 1")
elif x == 2:
raise RuntimeError("x is 2")
else:
return x
_lambda_mock = mocker.Mock(side_effect=_lambda)
runnable = RunnableLambda(_lambda_mock)
with pytest.raises(ValueError):
runnable.invoke(1)
assert _lambda_mock.call_count == 1
_lambda_mock.reset_mock()
with pytest.raises(ValueError):
runnable.with_retry(
stop_after_attempt=2,
retry_if_exception_type=(ValueError,),
).invoke(1)
assert _lambda_mock.call_count == 2
_lambda_mock.reset_mock()
with pytest.raises(RuntimeError):
runnable.with_retry(
stop_after_attempt=2,
wait_exponential_jitter=False,
retry_if_exception_type=(ValueError,),
).invoke(2)
assert _lambda_mock.call_count == 1
_lambda_mock.reset_mock()
with pytest.raises(ValueError):
        runnable.with_retry(
            stop_after_attempt=2,
wait_exponential_jitter=False,
retry_if_exception_type=(ValueError,),
).batch([1, 2, 0])
assert _lambda_mock.call_count == 3 + 2
_lambda_mock.reset_mock()
output = runnable.with_retry(
stop_after_attempt=2,
wait_exponential_jitter=False,
retry_if_exception_type=(ValueError,),
).batch([1, 2, 0], return_exceptions=True)
assert _lambda_mock.call_count == 3 + 2
assert len(output) == 3
assert isinstance(output[0], ValueError)
assert isinstance(output[1], RuntimeError)
assert output[2] == 0
_lambda_mock.reset_mock()
async def test_async_retrying(mocker: MockerFixture) -> None:
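    # Async mirror of test_retrying, exercising ainvoke and abatch.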
def _lambda(x: int) -> Union[int, Runnable]:
if x == 1:
raise ValueError("x is 1")
elif x == 2:
raise RuntimeError("x is 2")
else:
return x
_lambda_mock = mocker.Mock(side_effect=_lambda)
runnable = RunnableLambda(_lambda_mock)
    with pytest.raises(ValueError):
        await runnable.ainvoke(1)
assert _lambda_mock.call_count == 1
_lambda_mock.reset_mock()
with pytest.raises(ValueError):
await runnable.with_retry(
stop_after_attempt=2,
wait_exponential_jitter=False,
retry_if_exception_type=(ValueError, KeyError),
).ainvoke(1)
assert _lambda_mock.call_count == 2
_lambda_mock.reset_mock()
with pytest.raises(RuntimeError):
await runnable.with_retry(
stop_after_attempt=2,
wait_exponential_jitter=False,
retry_if_exception_type=(ValueError,),
).ainvoke(2)
assert _lambda_mock.call_count == 1
_lambda_mock.reset_mock()
with pytest.raises(ValueError):
await runnable.with_retry(
stop_after_attempt=2,
wait_exponential_jitter=False,
retry_if_exception_type=(ValueError,),
).abatch([1, 2, 0])
assert _lambda_mock.call_count == 3 + 2
_lambda_mock.reset_mock()
output = await runnable.with_retry(
        stop_after_attempt=2,
        wait_exponential_jitter=False,
retry_if_exception_type=(ValueError,),
).abatch([1, 2, 0], return_exceptions=True)
assert _lambda_mock.call_count == 3 + 2
assert len(output) == 3
assert isinstance(output[0], ValueError)
assert isinstance(output[1], RuntimeError)
assert output[2] == 0
_lambda_mock.reset_mock()
@freeze_time("2023-01-01")
def test_seq_batch_return_exceptions(mocker: MockerFixture) -> None:
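    # With return_exceptions=True, failures are returned in place and later steps only see surviving inputs.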
class ControlledExceptionRunnable(Runnable[str, str]):
def __init__(self, fail_starts_with: str) -> None:
self.fail_starts_with = fail_starts_with
def invoke(self, input: Any, config: Optional[RunnableConfig] = None) -> Any:
raise NotImplementedError()
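        # _batch appends "a" to each input, substituting a ValueError for inputs with the failing prefix.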
def _batch(
self,
inputs: List[str],
) -> List:
outputs: List[Any] = []
for input in inputs:
if input.startswith(self.fail_starts_with):
outputs.append(ValueError())
else:
outputs.append(input + "a")
return outputs
def batch(
            self,
            inputs: List[str],
config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None,
*,
return_exceptions: bool = False,
**kwargs: Any,
) -> List[str]:
return self._batch_with_config(
self._batch,
inputs,
config,
return_exceptions=return_exceptions,
**kwargs,
)
chain = (
ControlledExceptionRunnable("bux")
| ControlledExceptionRunnable("bar")
| ControlledExceptionRunnable("baz")
| ControlledExceptionRunnable("foo")
)
assert isinstance(chain, RunnableSequence)
with pytest.raises(ValueError):
chain.batch(["foo", "bar", "baz", "qux"])
spy = mocker.spy(ControlledExceptionRunnable, "batch")
tracer = FakeTracer()
inputs = ["foo", "bar", "baz", "qux"]
outputs = chain.batch(inputs, dict(callbacks=[tracer]), return_exceptions=True)
assert len(outputs) == 4
assert isinstance(outputs[0], ValueError)
    assert isinstance(outputs[1], ValueError)
    assert isinstance(outputs[2], ValueError)
assert outputs[3] == "quxaaaa"
assert spy.call_count == 4
inputs_to_batch = [c[0][1] for c in spy.call_args_list]
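    # Inputs that failed at a step drop out of the batches passed to later steps.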
assert inputs_to_batch == [
["foo", "bar", "baz", "qux"],
["fooa", "bara", "baza", "quxa"],
["fooaa", "bazaa", "quxaa"],
["fooaaa", "quxaaa"],
]
parent_runs = sorted(
(r for r in tracer.runs if r.parent_run_id is None),
key=lambda run: inputs.index(run.inputs["input"]),
)
assert len(parent_runs) == 4
parent_run_foo = parent_runs[0]
assert parent_run_foo.inputs["input"] == "foo"
assert parent_run_foo.error == repr(ValueError())
assert len(parent_run_foo.child_runs) == 4
assert [r.error for r in parent_run_foo.child_runs] == [
None,
        None,
        None,
repr(ValueError()),
]
parent_run_bar = parent_runs[1]
assert parent_run_bar.inputs["input"] == "bar"
assert parent_run_bar.error == repr(ValueError())
assert len(parent_run_bar.child_runs) == 2
assert [r.error for r in parent_run_bar.child_runs] == [
None,
repr(ValueError()),
]
parent_run_baz = parent_runs[2]
assert parent_run_baz.inputs["input"] == "baz"
assert parent_run_baz.error == repr(ValueError())
assert len(parent_run_baz.child_runs) == 3
assert [r.error for r in parent_run_baz.child_runs] == [
None,
None,
repr(ValueError()),
]
parent_run_qux = parent_runs[3]
assert parent_run_qux.inputs["input"] == "qux"
assert parent_run_qux.error is None
assert parent_run_qux.outputs is not None
assert parent_run_qux.outputs["output"] == "quxaaaa"
assert len(parent_run_qux.child_runs) == 4
assert [r.error for r in parent_run_qux.child_runs] == [None, None, None, None]
@freeze_time("2023-01-01")
async def test_seq_abatch_return_exceptions(mocker: MockerFixture) -> None:
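    # Async version of the batch exception-propagation test above.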
    class ControlledExceptionRunnable(Runnable[str, str]):
        def __init__(self, fail_starts_with: str) -> None:
self.fail_starts_with = fail_starts_with
def invoke(self, input: Any, config: Optional[RunnableConfig] = None) -> Any:
raise NotImplementedError()
async def _abatch(
self,
inputs: List[str],
) -> List:
outputs: List[Any] = []
for input in inputs:
if input.startswith(self.fail_starts_with):
outputs.append(ValueError())
else:
outputs.append(input + "a")
return outputs
        async def abatch(
            self,
inputs: List[str],
config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None,
*,
return_exceptions: bool = False,
**kwargs: Any,
) -> List[str]:
return await self._abatch_with_config(
self._abatch,
inputs,
config,
return_exceptions=return_exceptions,
**kwargs,
)
chain = (
ControlledExceptionRunnable("bux")
| ControlledExceptionRunnable("bar")
| ControlledExceptionRunnable("baz")
| ControlledExceptionRunnable("foo")
)
assert isinstance(chain, RunnableSequence)
with pytest.raises(ValueError):
await chain.abatch(["foo", "bar", "baz", "qux"])
spy = mocker.spy(ControlledExceptionRunnable, "abatch")
tracer = FakeTracer()
inputs = ["foo", "bar", "baz", "qux"]
outputs = await chain.abatch(
inputs, dict(callbacks=[tracer]), return_exceptions=True
    )
    assert len(outputs) == 4
assert isinstance(outputs[0], ValueError)
assert isinstance(outputs[1], ValueError)
assert isinstance(outputs[2], ValueError)
assert outputs[3] == "quxaaaa"
assert spy.call_count == 4
inputs_to_batch = [c[0][1] for c in spy.call_args_list]
assert inputs_to_batch == [
["foo", "bar", "baz", "qux"],
["fooa", "bara", "baza", "quxa"],
["fooaa", "bazaa", "quxaa"],
["fooaaa", "quxaaa"],
]
parent_runs = sorted(
(r for r in tracer.runs if r.parent_run_id is None),
key=lambda run: inputs.index(run.inputs["input"]),
)
assert len(parent_runs) == 4
parent_run_foo = parent_runs[0]
assert parent_run_foo.inputs["input"] == "foo"
assert parent_run_foo.error == repr(ValueError())
assert len(parent_run_foo.child_runs) == 4 |
    assert [r.error for r in parent_run_foo.child_runs] == [
None,
None,
None,
repr(ValueError()),
]
parent_run_bar = parent_runs[1]
assert parent_run_bar.inputs["input"] == "bar"
assert parent_run_bar.error == repr(ValueError())
assert len(parent_run_bar.child_runs) == 2
assert [r.error for r in parent_run_bar.child_runs] == [
None,
repr(ValueError()),
]
parent_run_baz = parent_runs[2]
assert parent_run_baz.inputs["input"] == "baz"
assert parent_run_baz.error == repr(ValueError())
assert len(parent_run_baz.child_runs) == 3
assert [r.error for r in parent_run_baz.child_runs] == [
None,
None,
repr(ValueError()),
]
parent_run_qux = parent_runs[3]
assert parent_run_qux.inputs["input"] == "qux"
assert parent_run_qux.error is None
assert parent_run_qux.outputs is not None
assert parent_run_qux.outputs["output"] == "quxaaaa"
assert len(parent_run_qux.child_runs) == 4
assert [r.error for r in parent_run_qux.child_runs] == [None, None, None, None] |
def test_runnable_branch_init() -> None:
"""Verify that runnable branch gets initialized properly."""
add = RunnableLambda(lambda x: x + 1)
condition = RunnableLambda(lambda x: x > 0)
with pytest.raises(ValueError):
RunnableBranch((condition, add))
with pytest.raises(ValueError):
RunnableBranch(condition)
@pytest.mark.parametrize(
"branches",
[
[
(RunnableLambda(lambda x: x > 0), RunnableLambda(lambda x: x + 1)),
RunnableLambda(lambda x: x - 1),
],
[
(RunnableLambda(lambda x: x > 0), RunnableLambda(lambda x: x + 1)),
(RunnableLambda(lambda x: x > 5), RunnableLambda(lambda x: x + 1)),
RunnableLambda(lambda x: x - 1),
],
[
(lambda x: x > 0, lambda x: x + 1),
(lambda x: x > 5, lambda x: x + 1),
lambda x: x - 1,
],
],
)
def test_runnable_branch_init_coercion(branches: Sequence[Any]) -> None: |
    """Verify that runnable branch gets initialized properly."""
runnable = RunnableBranch[int, int](*branches)
for branch in runnable.branches:
condition, body = branch
assert isinstance(condition, Runnable)
assert isinstance(body, Runnable)
assert isinstance(runnable.default, Runnable)
assert runnable.input_schema.schema() == {"title": "RunnableBranchInput"}
def test_runnable_branch_invoke_call_counts(mocker: MockerFixture) -> None:
"""Verify that runnables are invoked only when necessary.""" |
    add = RunnableLambda(lambda x: x + 1)
sub = RunnableLambda(lambda x: x - 1)
condition = RunnableLambda(lambda x: x > 0)
spy = mocker.spy(condition, "invoke")
add_spy = mocker.spy(add, "invoke")
branch = RunnableBranch[int, int]((condition, add), (condition, add), sub)
assert spy.call_count == 0
assert add_spy.call_count == 0
assert branch.invoke(1) == 2
assert add_spy.call_count == 1
assert spy.call_count == 1
assert branch.invoke(2) == 3
assert spy.call_count == 2
assert add_spy.call_count == 2
assert branch.invoke(-3) == -4
assert spy.call_count == 4
assert add_spy.call_count == 2
def test_runnable_branch_invoke() -> None:
def raise_value_error(x: int) -> int:
"""Raise a value error."""
raise ValueError("x is too large")
branch = RunnableBranch[int, int](
(lambda x: x > 100, raise_value_error),
(lambda x: x > 0 and x < 5, lambda x: x + 1),
(lambda x: x > 5, lambda x: x * 10),
lambda x: x - 1, |
    )
assert branch.invoke(1) == 2
assert branch.invoke(10) == 100
assert branch.invoke(0) == -1
with pytest.raises(ValueError):
branch.invoke(1000)
def test_runnable_branch_batch() -> None:
"""Test batch variant."""
branch = RunnableBranch[int, int](
(lambda x: x > 0 and x < 5, lambda x: x + 1),
(lambda x: x > 5, lambda x: x * 10),
lambda x: x - 1,
)
assert branch.batch([1, 10, 0]) == [2, 100, -1]
async def test_runnable_branch_ainvoke() -> None:
"""Test async variant of invoke."""
branch = RunnableBranch[int, int](
(lambda x: x > 0 and x < 5, lambda x: x + 1),
(lambda x: x > 5, lambda x: x * 10),
lambda x: x - 1,
)
assert await branch.ainvoke(1) == 2
assert await branch.ainvoke(10) == 100
assert await branch.ainvoke(0) == -1
async def condition(x: int) -> bool:
return x > 0
async def add(x: int) -> int: |
        return x + 1
async def sub(x: int) -> int:
return x - 1
branch = RunnableBranch[int, int]((condition, add), sub)
assert await branch.ainvoke(1) == 2
assert await branch.ainvoke(-10) == -11
def test_runnable_branch_invoke_callbacks() -> None:
"""Verify that callbacks are correctly used in invoke."""
tracer = FakeTracer()
def raise_value_error(x: int) -> int:
"""Raise a value error."""
raise ValueError("x is too large")
branch = RunnableBranch[int, int](
(lambda x: x > 100, raise_value_error),
lambda x: x - 1,
)
assert branch.invoke(1, config={"callbacks": [tracer]}) == 0
assert len(tracer.runs) == 1
assert tracer.runs[0].error is None
assert tracer.runs[0].outputs == {"output": 0}
with pytest.raises(ValueError):
branch.invoke(1000, config={"callbacks": [tracer]})
assert len(tracer.runs) == 2
assert tracer.runs[1].error == "ValueError('x is too large')"
assert tracer.runs[1].outputs is None
async def test_runnable_branch_ainvoke_callbacks() -> None:
"""Verify that callbacks are invoked correctly in ainvoke."""
tracer = FakeTracer()
async def raise_value_error(x: int) -> int: |
        """Raise a value error."""
raise ValueError("x is too large")
branch = RunnableBranch[int, int](
(lambda x: x > 100, raise_value_error),
lambda x: x - 1,
)
assert await branch.ainvoke(1, config={"callbacks": [tracer]}) == 0
assert len(tracer.runs) == 1
assert tracer.runs[0].error is None
assert tracer.runs[0].outputs == {"output": 0}
with pytest.raises(ValueError):
await branch.ainvoke(1000, config={"callbacks": [tracer]})
assert len(tracer.runs) == 2
assert tracer.runs[1].error == "ValueError('x is too large')"
assert tracer.runs[1].outputs is None
async def test_runnable_branch_abatch() -> None:
"""Test async variant of invoke."""
branch = RunnableBranch[int, int](
(lambda x: x > 0 and x < 5, lambda x: x + 1),
(lambda x: x > 5, lambda x: x * 10),
lambda x: x - 1,
) |
    assert await branch.abatch([1, 10, 0]) == [2, 100, -1]
@pytest.mark.skipif(
sys.version_info < (3, 9), reason="Requires python version >= 3.9 to run."
)
def test_representation_of_runnables() -> None:
"""Test representation of runnables."""
runnable = RunnableLambda(lambda x: x * 2)
assert repr(runnable) == "RunnableLambda(lambda x: x * 2)"
def f(x: int) -> int:
"""Return 2."""
return 2
assert repr(RunnableLambda(func=f)) == "RunnableLambda(...)"
async def af(x: int) -> int:
"""Return 2."""
return 2
assert repr(RunnableLambda(func=f, afunc=af)) == "RunnableLambda(...)"
assert repr(
RunnableLambda(lambda x: x + 2)
| {
"a": RunnableLambda(lambda x: x * 2),
"b": RunnableLambda(lambda x: x * 3),
}
) == (
"RunnableLambda(...)\n"
"| {\n"
" a: RunnableLambda(...),\n"
" b: RunnableLambda(...)\n"
" }"
), "repr where code string contains multiple lambdas gives up"
async def test_tool_from_runnable() -> None: |
    prompt = (
SystemMessagePromptTemplate.from_template("You are a nice assistant.")
+ "{question}"
)
llm = FakeStreamingListLLM(responses=["foo-lish"])
chain = prompt | llm | StrOutputParser()
chain_tool = tool("chain_tool", chain)
assert isinstance(chain_tool, BaseTool)
assert chain_tool.name == "chain_tool"
assert chain_tool.run({"question": "What up"}) == chain.invoke(
{"question": "What up"}
)
assert await chain_tool.arun({"question": "What up"}) == await chain.ainvoke(
{"question": "What up"}
)
assert chain_tool.description.endswith(repr(chain))
assert chain_tool.args_schema.schema() == chain.input_schema.schema()
assert chain_tool.args_schema.schema() == {
"properties": {"question": {"title": "Question", "type": "string"}},
"title": "PromptInput",
"type": "object",
}
async def test_runnable_gen() -> None: |
    """Test that a generator can be used as a runnable."""
def gen(input: Iterator[Any]) -> Iterator[int]:
yield 1
yield 2
yield 3
runnable = RunnableGenerator(gen)
assert runnable.input_schema.schema() == {"title": "RunnableGeneratorInput"}
assert runnable.output_schema.schema() == {
"title": "RunnableGeneratorOutput",
"type": "integer",
}
assert runnable.invoke(None) == 6
assert list(runnable.stream(None)) == [1, 2, 3]
assert runnable.batch([None, None]) == [6, 6]
async def agen(input: AsyncIterator[Any]) -> AsyncIterator[int]: |
        yield 1
yield 2
yield 3
arunnable = RunnableGenerator(agen)
assert await arunnable.ainvoke(None) == 6
assert [p async for p in arunnable.astream(None)] == [1, 2, 3]
assert await arunnable.abatch([None, None]) == [6, 6]
async def test_runnable_gen_transform() -> None:
"""Test that a generator can be used as a runnable."""
def gen_indexes(length_iter: Iterator[int]) -> Iterator[int]:
for i in range(next(length_iter)):
yield i
async def agen_indexes(length_iter: AsyncIterator[int]) -> AsyncIterator[int]:
async for length in length_iter:
for i in range(length):
yield i
def plus_one(input: Iterator[int]) -> Iterator[int]:
for i in input:
yield i + 1
async def aplus_one(input: AsyncIterator[int]) -> AsyncIterator[int]: |
        async for i in input:
yield i + 1
chain: Runnable = RunnableGenerator(gen_indexes, agen_indexes) | plus_one
achain = RunnableGenerator(gen_indexes, agen_indexes) | aplus_one
assert chain.input_schema.schema() == {
"title": "RunnableGeneratorInput",
"type": "integer",
}
assert chain.output_schema.schema() == {
"title": "RunnableGeneratorOutput",
"type": "integer",
}
assert achain.input_schema.schema() == {
"title": "RunnableGeneratorInput",
"type": "integer",
}
assert achain.output_schema.schema() == {
"title": "RunnableGeneratorOutput",
"type": "integer",
}
assert list(chain.stream(3)) == [1, 2, 3]
assert [p async for p in achain.astream(4)] == [1, 2, 3, 4]
def test_with_config_callbacks() -> None:
result = RunnableLambda(lambda x: x).with_config({"callbacks": []})
assert isinstance(result, RunnableBinding) |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 13,326 | Enhancing iMessageChatLoader to prevent skipping messages with extractable content | ### Feature request
Implement extraction of message content whenever iMessageChatLoader would otherwise skip messages whose `text` field is null even though the content is present in other fields.
### Motivation
Due to an iMessage database schema change introduced in macOS Ventura, newer messages now have their content encoded in the `attributedBody` field, with the `text` field left null.
This issue was originally raised by @idvorkin in Issue #10680.
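As a hedged sketch of one known community heuristic — the function name and byte offsets below are assumptions, not the final implementation: the `attributedBody` blob is an archived `NSAttributedString`, and the plain text can usually be recovered after its `NSString` marker.

```python
def extract_text_from_attributed_body(attributed_body: bytes) -> str:
    """Heuristic decoder for the typedstream blob in `attributedBody`.

    Sketch only: the text sits after an `NSString` marker as a
    length-prefixed UTF-8 run; the offsets are empirical assumptions.
    """
    content = attributed_body.split(b"NSString")[1][5:]  # skip archiver header bytes
    length, start = content[0], 1
    if length == 0x81:  # 0x81 flags a two-byte little-endian length
        length = int.from_bytes(content[1:3], "little")
        start = 3
    return content[start : start + length].decode("utf-8", errors="ignore")
```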
### Your contribution
We intend to submit a pull request for this issue close to the end of November.
| https://github.com/langchain-ai/langchain/issues/13326 | https://github.com/langchain-ai/langchain/pull/13634 | e5256bcb69bde036cd02ca4871333451efc11833 | 32d794f5a3dac334d258f11a1b4b7d727a3549a4 | "2023-11-14T04:08:46Z" | python | "2023-11-28T20:45:43Z" | libs/langchain/langchain/chat_loaders/imessage.py |
from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING, Iterator, List, Optional, Union
from langchain_core.chat_sessions import ChatSession
from langchain_core.messages import HumanMessage
from langchain.chat_loaders.base import BaseChatLoader
if TYPE_CHECKING:
import sqlite3
class IMessageChatLoader(BaseChatLoader):
"""Load chat sessions from the `iMessage` chat.db SQLite file.
It only works on macOS when you have iMessage enabled and have the chat.db file.
The chat.db file is likely located at ~/Library/Messages/chat.db. However, your
terminal may not have permission to access this file. To resolve this, you can
copy the file to a different location, change the permissions of the file, or
grant full disk access for your terminal emulator
in System Settings > Security and Privacy > Full Disk Access.
"""
def __init__(self, path: Optional[Union[str, Path]] = None): |
        """
Initialize the IMessageChatLoader.
Args:
path (str or Path, optional): Path to the chat.db SQLite file.
Defaults to None, in which case the default path
~/Library/Messages/chat.db will be used.
"""
if path is None:
path = Path.home() / "Library" / "Messages" / "chat.db"
self.db_path = path if isinstance(path, Path) else Path(path)
if not self.db_path.exists():
raise FileNotFoundError(f"File {self.db_path} not found")
try:
import sqlite3
except ImportError as e:
raise ImportError(
"The sqlite3 module is required to load iMessage chats.\n"
"Please install it with `pip install pysqlite3`"
) from e
def _load_single_chat_session(
self, cursor: "sqlite3.Cursor", chat_id: int
) -> ChatSession:
"""
Load a single chat session from the iMessage chat.db.
Args:
cursor: SQLite cursor object. |
            chat_id (int): ID of the chat session to load.
Returns:
ChatSession: Loaded chat session.
"""
results: List[HumanMessage] = []
query = """
SELECT message.date, handle.id, message.text
FROM message
JOIN chat_message_join ON message.ROWID = chat_message_join.message_id
JOIN handle ON message.handle_id = handle.ROWID
WHERE chat_message_join.chat_id = ?
ORDER BY message.date ASC;
"""
cursor.execute(query, (chat_id,))
messages = cursor.fetchall()
for date, sender, text in messages:
if text:
results.append(
HumanMessage(
role=sender,
content=text,
additional_kwargs={
"message_time": date,
"sender": sender,
},
)
)
return ChatSession(messages=results)
def lazy_load(self) -> Iterator[ChatSession]:
""" |
        Lazy load the chat sessions from the iMessage chat.db
and yield them in the required format.
Yields:
ChatSession: Loaded chat session.
"""
import sqlite3
try:
conn = sqlite3.connect(self.db_path)
except sqlite3.OperationalError as e:
raise ValueError(
f"Could not open iMessage DB file {self.db_path}.\n"
"Make sure your terminal emulator has disk access to this file.\n"
" You can either copy the DB file to an accessible location"
" or grant full disk access for your terminal emulator."
" You can grant full disk access for your terminal emulator"
" in System Settings > Security and Privacy > Full Disk Access."
) from e
cursor = conn.cursor()
query = """SELECT chat_id
FROM message
JOIN chat_message_join ON message.ROWID = chat_message_join.message_id
GROUP BY chat_id
ORDER BY MAX(date) DESC;"""
cursor.execute(query)
chat_ids = [row[0] for row in cursor.fetchall()]
for chat_id in chat_ids:
yield self._load_single_chat_session(cursor, chat_id)
conn.close() |
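A minimal usage sketch under the class docstring's assumptions — the copy location is hypothetical; the loader only needs a readable path to the SQLite file:

```python
from pathlib import Path

from langchain.chat_loaders.imessage import IMessageChatLoader

# Assumes chat.db was first copied somewhere the terminal may read.
loader = IMessageChatLoader(path=Path.home() / "Desktop" / "chat.db")
for session in loader.lazy_load():
    for message in session["messages"]:
        print(message.additional_kwargs["sender"], message.content)
```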
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,674 | HuggingFace Data Loader fails when context is not str | ### System Info
langchain 0.0.285
python 3.11.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to load https://huggingface.co/datasets/hotpot_qa/viewer/fullwiki/validation
```
from langchain.document_loaders import HuggingFaceDatasetLoader

dataset_name = "hotpot_qa"
page_content_column = "context"
name = "fullwiki"

loader = HuggingFaceDatasetLoader(dataset_name, page_content_column, name)
docs = loader.load()
```
```
---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
/Users/deanchanter/Documents/GitHub/comma-chameleons/hello_doc_read.ipynb Cell 1 line 8
      4 name = "fullwiki"
      7 loader = HuggingFaceDatasetLoader(dataset_name, page_content_column, name)
----> 8 docs = loader.load()

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/langchain/document_loaders/hugging_face_dataset.py:87, in HuggingFaceDatasetLoader.load(self)
     85 def load(self) -> List[Document]:
     86     """Load documents."""
---> 87     return list(self.lazy_load())

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/langchain/document_loaders/hugging_face_dataset.py:76, in HuggingFaceDatasetLoader.lazy_load(self)
     59 raise ImportError(
     60     "Could not import datasets python package. "
     61     "Please install it with `pip install datasets`."
     62 )
     64 dataset = load_dataset(
     65     path=self.path,
     66     name=self.name,
    (...)
     73     num_proc=self.num_proc,
     74 )
---> 76 yield from (
     77     Document(
     78         page_content=row.pop(self.page_content_column),
     79         metadata=row,
     80     )
     81     for key in dataset.keys()
     82     for row in dataset[key]
     83 )

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/langchain/document_loaders/hugging_face_dataset.py:77, in <genexpr>(.0)
     76 yield from (
---> 77     Document(
     78         page_content=row.pop(self.page_content_column),
     79         metadata=row,
     80     )
     81     for key in dataset.keys()
     82     for row in dataset[key]
     83 )

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/langchain/load/serializable.py:75, in Serializable.__init__(self, **kwargs)
     74 def __init__(self, **kwargs: Any) -> None:
---> 75     super().__init__(**kwargs)
     76     self._lc_kwargs = kwargs

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for Document
page_content
  str type expected (type=type_error.str)
```
### Expected behavior
Either extend the class to handle more types of data or update the docs. Willing to do a PR to extend if you're open.
| https://github.com/langchain-ai/langchain/issues/10674 | https://github.com/langchain-ai/langchain/pull/13864 | 981f78f920d4e0514b7e627d9f3266afcccc9859 | 750485eaa8d6ca364f00454df4602abe2a8c9ba1 | "2023-09-16T10:49:37Z" | python | "2023-11-29T03:33:16Z" | libs/langchain/langchain/document_loaders/hugging_face_dataset.py |
from typing import Iterator, List, Mapping, Optional, Sequence, Union
from langchain_core.documents import Document
from langchain.document_loaders.base import BaseLoader
class HuggingFaceDatasetLoader(BaseLoader):
"""Load from `Hugging Face Hub` datasets."""
def __init__(
self,
path: str,
page_content_column: str = "text",
name: Optional[str] = None,
data_dir: Optional[str] = None,
data_files: Optional[
Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]
] = None,
cache_dir: Optional[str] = None,
keep_in_memory: Optional[bool] = None,
save_infos: bool = False,
use_auth_token: Optional[Union[bool, str]] = None,
num_proc: Optional[int] = None, |
    ):
"""Initialize the HuggingFaceDatasetLoader.
Args:
path: Path or name of the dataset.
page_content_column: Page content column name. Default is "text".
Note: Currently the function assumes the content is a string.
If it is not download the dataset using huggingface library and convert
using the json or pandas loaders.
https://github.com/langchain-ai/langchain/issues/10674
name: Name of the dataset configuration.
data_dir: Data directory of the dataset configuration.
data_files: Path(s) to source data file(s).
cache_dir: Directory to read/write data.
keep_in_memory: Whether to copy the dataset in-memory.
save_infos: Save the dataset information (checksums/size/splits/...).
Default is False.
use_auth_token: Bearer token for remote files on the Dataset Hub.
num_proc: Number of processes.
"""
self.path = path
self.page_content_column = page_content_column
self.name = name
self.data_dir = data_dir
self.data_files = data_files
self.cache_dir = cache_dir
self.keep_in_memory = keep_in_memory
self.save_infos = save_infos
self.use_auth_token = use_auth_token
self.num_proc = num_proc
def lazy_load( |
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 10,674 | HuggingFace Data Loader fails when context is not str | ### System Info
langchain 0.0.285
python 3.11.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to load https://huggingface.co/datasets/hotpot_qa/viewer/fullwiki/validation
from langchain.document_loaders import HuggingFaceDatasetLoader
dataset_name = "hotpot_qa"
page_content_column = "context"
name = "fullwiki"
loader = HuggingFaceDatasetLoader(dataset_name, page_content_column, name)
docs = loader.load()
---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
/Users/deanchanter/Documents/GitHub/comma-chameleons/hello_doc_read.ipynb Cell 1 line 8
      4 name = "fullwiki"
      7 loader = HuggingFaceDatasetLoader(dataset_name, page_content_column, name)
----> 8 docs = loader.load()

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/langchain/document_loaders/hugging_face_dataset.py:87, in HuggingFaceDatasetLoader.load(self)
     85 def load(self) -> List[Document]:
     86     """Load documents."""
---> 87     return list(self.lazy_load())

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/langchain/document_loaders/hugging_face_dataset.py:76, in HuggingFaceDatasetLoader.lazy_load(self)
     59     raise ImportError(
     60         "Could not import datasets python package. "
     61         "Please install it with `pip install datasets`."
     62     )
     64 dataset = load_dataset(
     65     path=self.path,
     66     name=self.name,
    (...)
     73     num_proc=self.num_proc,
     74 )
---> 76 yield from (
     77     Document(
     78         page_content=row.pop(self.page_content_column),
     79         metadata=row,
     80     )
     81     for key in dataset.keys()
     82     for row in dataset[key]
     83 )

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/langchain/document_loaders/hugging_face_dataset.py:77, in <genexpr>(.0)
     59     raise ImportError(
     60         "Could not import datasets python package. "
     61         "Please install it with `pip install datasets`."
     62     )
     64 dataset = load_dataset(
     65     path=self.path,
     66     name=self.name,
    (...)
     73     num_proc=self.num_proc,
     74 )
     76 yield from (
---> 77     Document(
     78         page_content=row.pop(self.page_content_column),
     79         metadata=row,
     80     )
     81     for key in dataset.keys()
     82     for row in dataset[key]
     83 )

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/langchain/load/serializable.py:75, in Serializable.__init__(self, **kwargs)
     74 def __init__(self, **kwargs: Any) -> None:
---> 75     super().__init__(**kwargs)
     76     self._lc_kwargs = kwargs

File ~/Documents/GitHub/comma-chameleons/env/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for Document
page_content
  str type expected (type=type_error.str)
### Expected behavior
Either extend the class to handle more types of data or update the docs. Willing to do a PR to extend it if you're open. | https://github.com/langchain-ai/langchain/issues/10674 | https://github.com/langchain-ai/langchain/pull/13864 | 981f78f920d4e0514b7e627d9f3266afcccc9859 | 750485eaa8d6ca364f00454df4602abe2a8c9ba1 | "2023-09-16T10:49:37Z" | python | "2023-11-29T03:33:16Z" | libs/langchain/langchain/document_loaders/hugging_face_dataset.py | self,
) -> Iterator[Document]:
"""Load documents lazily."""
try:
from datasets import load_dataset
except ImportError:
raise ImportError(
"Could not import datasets python package. "
"Please install it with `pip install datasets`."
)
dataset = load_dataset(
path=self.path,
name=self.name,
data_dir=self.data_dir,
data_files=self.data_files,
cache_dir=self.cache_dir,
keep_in_memory=self.keep_in_memory,
save_infos=self.save_infos,
use_auth_token=self.use_auth_token,
num_proc=self.num_proc,
)
yield from (
Document(
page_content=row.pop(self.page_content_column),
metadata=row,
)
for key in dataset.keys()
for row in dataset[key]
)
def load(self) -> List[Document]:
"""Load documents."""
return list(self.lazy_load()) |
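The failure above comes from `Document` requiring `page_content` to be a `str`, while columns such as `hotpot_qa`'s `context` hold nested dicts. A minimal sketch of the direction proposed in the report — serializing non-string rows before building each `Document` — is shown below; the `parse_obj` helper is an assumption for illustration, not the shipped API.

```python
import json
from typing import Any


def parse_obj(obj: Any) -> str:
    # Hypothetical helper: Document.page_content must be a str, so
    # serialize dict/list dataset columns instead of passing them through.
    if isinstance(obj, (dict, list)):
        return json.dumps(obj)
    return str(obj)


# Inside HuggingFaceDatasetLoader.lazy_load, the construction would then
# become (sketch):
#     Document(page_content=parse_obj(row.pop(self.page_content_column)),
#              metadata=row)
```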
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,034 | Loading online PDFs gives temporary file path as source in metadata | Hi,
first up, thank you for making langchain! I was playing around a little and found a minor issue with loading online PDFs, and would like to start contributing to langchain maybe by fixing this.
### System Info
langchain 0.0.220, google collab, python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import PyMuPDFLoader
loader = PyMuPDFLoader('https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf')
pages = loader.load()
pages[0].metadata
```
<img width="977" alt="image" src="https://github.com/hwchase17/langchain/assets/21276922/4ededc60-bb03-4502-a8c8-3c221ab109c4">
### Expected behavior
Instead of giving the temporary file path, which is not useful and is deleted shortly after, it would be more helpful if the source were set to the URL passed in. This would require some fixes in the `langchain/document_loaders/pdf.py` file. | https://github.com/langchain-ai/langchain/issues/7034 | https://github.com/langchain-ai/langchain/pull/13274 | 6f64cb5078bb71007d25fff847541fd8f7713c0c | 9bd6e9df365e966938979511237c035a02fb4fa9 | "2023-07-01T23:24:53Z" | python | "2023-11-29T20:07:46Z" | libs/langchain/langchain/document_loaders/parsers/pdf.py | """Module contains common parsers for PDFs."""
from __future__ import annotations
import warnings
from typing import (
TYPE_CHECKING,
Any,
Iterable,
Iterator,
Mapping,
Optional,
Sequence,
Union,
)
from urllib.parse import urlparse
import numpy as np
from langchain_core.documents import Document
from langchain.document_loaders.base import BaseBlobParser
from langchain.document_loaders.blob_loaders import Blob
if TYPE_CHECKING:
import fitz.fitz
import pdfminer.layout
import pdfplumber.page
import pypdf._page
import pypdfium2._helpers.page
_PDF_FILTER_WITH_LOSS = ["DCTDecode", "DCT", "JPXDecode"]
_PDF_FILTER_WITHOUT_LOSS = [
"LZWDecode",
"LZW",
"FlateDecode",
"Fl",
"ASCII85Decode",
"A85",
"ASCIIHexDecode",
"AHx",
"RunLengthDecode",
"RL",
"CCITTFaxDecode",
"CCF",
"JBIG2Decode",
]
def extract_from_images_with_rapidocr(
images: Sequence[Union[Iterable[np.ndarray], bytes]]
) -> str:
"""Extract text from images with RapidOCR.
Args:
images: Images to extract text from.
Returns:
Text extracted from images.
Raises:
ImportError: If `rapidocr-onnxruntime` package is not installed.
"""
try:
from rapidocr_onnxruntime import RapidOCR
except ImportError:
raise ImportError(
"`rapidocr-onnxruntime` package not found, please install it with "
"`pip install rapidocr-onnxruntime`"
)
ocr = RapidOCR()
text = ""
for img in images:
result, _ = ocr(img)
if result:
result = [text[1] for text in result]
text += "\n".join(result)
return text
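# Illustrative usage (not part of the module source): the helper accepts
# decoded numpy arrays or raw encoded bytes, so OCR over a single image
# file could look like the sketch below. The file name is hypothetical.
#
#     with open("scanned_page.png", "rb") as f:
#         text = extract_from_images_with_rapidocr([f.read()])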
class PyPDFParser(BaseBlobParser):
"""Load `PDF` using `pypdf`"""
def __init__(
self, password: Optional[Union[str, bytes]] = None, extract_images: bool = False
):
self.password = password
self.extract_images = extract_images
def lazy_parse(self, blob: Blob) -> Iterator[Document]:
"""Lazily parse the blob."""
import pypdf
with blob.as_bytes_io() as pdf_file_obj:
pdf_reader = pypdf.PdfReader(pdf_file_obj, password=self.password)
yield from [
Document(
page_content=page.extract_text()
+ self._extract_images_from_page(page),
metadata={"source": blob.source, "page": page_number},
)
for page_number, page in enumerate(pdf_reader.pages)
]
def _extract_images_from_page(self, page: pypdf._page.PageObject) -> str:
"""Extract images from page and get the text with RapidOCR."""
if not self.extract_images or "/XObject" not in page["/Resources"].keys():
return ""
xObject = page["/Resources"]["/XObject"].get_object()
images = []
for obj in xObject:
if xObject[obj]["/Subtype"] == "/Image":
if xObject[obj]["/Filter"][1:] in _PDF_FILTER_WITHOUT_LOSS:
height, width = xObject[obj]["/Height"], xObject[obj]["/Width"]
images.append(
np.frombuffer(xObject[obj].get_data(), dtype=np.uint8).reshape(
height, width, -1
)
)
elif xObject[obj]["/Filter"][1:] in _PDF_FILTER_WITH_LOSS:
images.append(xObject[obj].get_data())
else:
warnings.warn("Unknown PDF Filter!")
return extract_from_images_with_rapidocr(images)
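# Illustrative usage (sketch, not module source): parsers operate on Blobs,
# so a local file can be parsed without going through a loader. The path
# below is hypothetical.
#
#     blob = Blob.from_path("example.pdf")
#     docs = list(PyPDFParser().lazy_parse(blob))
#     # each Document carries {"source": blob.source, "page": n} metadata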
class PDFMinerParser(BaseBlobParser):
"""Parse `PDF` using `PDFMiner`."""
def __init__(self, extract_images: bool = False, *, concatenate_pages: bool = True):
"""Initialize a parser based on PDFMiner.
Args:
extract_images: Whether to extract images from PDF.
concatenate_pages: If True, concatenate all PDF pages into one a single
document. Otherwise, return one document per page.
"""
self.extract_images = extract_images
self.concatenate_pages = concatenate_pages
def lazy_parse(self, blob: Blob) -> Iterator[Document]:
"""Lazily parse the blob."""
if not self.extract_images:
from pdfminer.high_level import extract_text
with blob.as_bytes_io() as pdf_file_obj:
if self.concatenate_pages:
text = extract_text(pdf_file_obj)
metadata = {"source": blob.source}
yield Document(page_content=text, metadata=metadata)
else:
from pdfminer.pdfpage import PDFPage
pages = PDFPage.get_pages(pdf_file_obj)
for i, _ in enumerate(pages):
text = extract_text(pdf_file_obj, page_numbers=[i])
metadata = {"source": blob.source, "page": str(i)}
yield Document(page_content=text, metadata=metadata)
else:
import io
from pdfminer.converter import PDFPageAggregator, TextConverter
from pdfminer.layout import LAParams
from pdfminer.pdfinterp import PDFPageInterpreter, PDFResourceManager
from pdfminer.pdfpage import PDFPage
text_io = io.StringIO()
with blob.as_bytes_io() as pdf_file_obj:
pages = PDFPage.get_pages(pdf_file_obj)
rsrcmgr = PDFResourceManager()
device_for_text = TextConverter(rsrcmgr, text_io, laparams=LAParams())
device_for_image = PDFPageAggregator(rsrcmgr, laparams=LAParams())
interpreter_for_text = PDFPageInterpreter(rsrcmgr, device_for_text)
interpreter_for_image = PDFPageInterpreter(rsrcmgr, device_for_image)
for i, page in enumerate(pages):
interpreter_for_text.process_page(page)
interpreter_for_image.process_page(page)
content = text_io.getvalue() + self._extract_images_from_page(
device_for_image.get_result()
)
text_io.truncate(0)
text_io.seek(0)
metadata = {"source": blob.source, "page": str(i)}
yield Document(page_content=content, metadata=metadata)
def _extract_images_from_page(self, page: pdfminer.layout.LTPage) -> str:
"""Extract images from page and get the text with RapidOCR."""
import pdfminer
def get_image(layout_object: Any) -> Any:
if isinstance(layout_object, pdfminer.layout.LTImage):
return layout_object
if isinstance(layout_object, pdfminer.layout.LTContainer):
for child in layout_object:
return get_image(child)
else:
return None
images = []
for img in list(filter(bool, map(get_image, page))):
if img.stream["Filter"].name in _PDF_FILTER_WITHOUT_LOSS:
images.append(
np.frombuffer(img.stream.get_data(), dtype=np.uint8).reshape(
img.stream["Height"], img.stream["Width"], -1
)
)
elif img.stream["Filter"].name in _PDF_FILTER_WITH_LOSS:
images.append(img.stream.get_data())
else:
warnings.warn("Unknown PDF Filter!")
return extract_from_images_with_rapidocr(images)
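# Design note: concatenate_pages=True returns the whole PDF as a single
# Document, while False walks pdfminer's PDFPage.get_pages() and emits one
# Document per page with a "page" metadata entry. Sketch of the per-page
# mode:
#
#     parser = PDFMinerParser(concatenate_pages=False)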
class PyMuPDFParser(BaseBlobParser):
"""Parse `PDF` using `PyMuPDF`."""
def __init__(
self,
text_kwargs: Optional[Mapping[str, Any]] = None,
extract_images: bool = False,
) -> None:
"""Initialize the parser.
Args:
text_kwargs: Keyword arguments to pass to ``fitz.Page.get_text()``.
extract_images: Whether to extract images from the PDF with RapidOCR.
"""
self.text_kwargs = text_kwargs or {}
self.extract_images = extract_images
def lazy_parse(self, blob: Blob) -> Iterator[Document]:
"""Lazily parse the blob."""
import fitz
with blob.as_bytes_io() as file_path:
doc = fitz.open(file_path)
yield from [
Document(
page_content=page.get_text(**self.text_kwargs)
+ self._extract_images_from_page(doc, page),
metadata=dict(
{
"source": blob.source,
"file_path": blob.source,
"page": page.number,
"total_pages": len(doc),
},
**{
k: doc.metadata[k]
for k in doc.metadata
if type(doc.metadata[k]) in [str, int]
},
),
)
for page in doc
]
def _extract_images_from_page(
self, doc: fitz.fitz.Document, page: fitz.fitz.Page
) -> str:
"""Extract images from page and get the text with RapidOCR."""
if not self.extract_images:
return ""
import fitz
img_list = page.get_images()
imgs = []
for img in img_list:
xref = img[0]
pix = fitz.Pixmap(doc, xref)
imgs.append(
np.frombuffer(pix.samples, dtype=np.uint8).reshape(
pix.height, pix.width, -1
)
)
return extract_from_images_with_rapidocr(imgs)
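# Note on metadata (this is where the issue above bites): "source" is taken
# from blob.source, so when a loader downloads a URL to a temporary file the
# temporary path lands in the Document unless the loader overrides it. A
# loader-side override could look like this sketch (web_path is an assumed
# attribute name, not necessarily the shipped fix):
#
#     for doc in parser.lazy_parse(blob):
#         doc.metadata["source"] = web_path or doc.metadata["source"]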
class PyPDFium2Parser(BaseBlobParser):
"""Parse `PDF` with `PyPDFium2`."""
def __init__(self, extract_images: bool = False) -> None:
"""Initialize the parser."""
try:
import pypdfium2
except ImportError:
raise ImportError(
"pypdfium2 package not found, please install it with"
" `pip install pypdfium2`"
)
self.extract_images = extract_images
def lazy_parse(self, blob: Blob) -> Iterator[Document]:
"""Lazily parse the blob."""
import pypdfium2
with blob.as_bytes_io() as file_path:
pdf_reader = pypdfium2.PdfDocument(file_path, autoclose=True)
try:
for page_number, page in enumerate(pdf_reader):
text_page = page.get_textpage()
content = text_page.get_text_range()
text_page.close()
content += "\n" + self._extract_images_from_page(page)
page.close()
metadata = {"source": blob.source, "page": page_number}
yield Document(page_content=content, metadata=metadata)
finally:
pdf_reader.close()
def _extract_images_from_page(self, page: pypdfium2._helpers.page.PdfPage) -> str:
"""Extract images from page and get the text with RapidOCR."""
if not self.extract_images:
return ""
import pypdfium2.raw as pdfium_c
images = list(page.get_objects(filter=(pdfium_c.FPDF_PAGEOBJ_IMAGE,)))
images = list(map(lambda x: x.get_bitmap().to_numpy(), images))
return extract_from_images_with_rapidocr(images)
class PDFPlumberParser(BaseBlobParser):
"""Parse `PDF` with `PDFPlumber`."""
def __init__(
self,
text_kwargs: Optional[Mapping[str, Any]] = None,
dedupe: bool = False,
extract_images: bool = False,
) -> None:
"""Initialize the parser.
Args:
text_kwargs: Keyword arguments to pass to ``pdfplumber.Page.extract_text()``
dedupe: If True, deduplicate visually duplicated characters via
pdfplumber's ``dedupe_chars()``.
extract_images: Whether to extract images from the PDF with RapidOCR.
"""
self.text_kwargs = text_kwargs or {}
self.dedupe = dedupe
self.extract_images = extract_images
def lazy_parse(self, blob: Blob) -> Iterator[Document]:
"""Lazily parse the blob."""
import pdfplumber
with blob.as_bytes_io() as file_path:
doc = pdfplumber.open(file_path)
yield from [
Document(
page_content=self._process_page_content(page)
+ "\n"
+ self._extract_images_from_page(page),
metadata=dict(
{
"source": blob.source,
"file_path": blob.source,
"page": page.page_number - 1,
"total_pages": len(doc.pages),
},
**{
k: doc.metadata[k]
for k in doc.metadata
if type(doc.metadata[k]) in [str, int]
},
),
)
for page in doc.pages
]
def _process_page_content(self, page: pdfplumber.page.Page) -> str:
"""Process the page content based on dedupe."""
if self.dedupe:
return page.dedupe_chars().extract_text(**self.text_kwargs)
return page.extract_text(**self.text_kwargs)
def _extract_images_from_page(self, page: pdfplumber.page.Page) -> str:
"""Extract images from page and get the text with RapidOCR."""
if not self.extract_images:
return ""
images = []
for img in page.images:
if img["stream"]["Filter"].name in _PDF_FILTER_WITHOUT_LOSS:
images.append(
np.frombuffer(img["stream"].get_data(), dtype=np.uint8).reshape(
img["stream"]["Height"], img["stream"]["Width"], -1
)
)
elif img["stream"]["Filter"].name in _PDF_FILTER_WITH_LOSS:
images.append(img["stream"].get_data())
else:
warnings.warn("Unknown PDF Filter!")
return extract_from_images_with_rapidocr(images)
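# Design note: dedupe=True routes extraction through pdfplumber's
# dedupe_chars(), dropping visually duplicated characters that some PDFs
# emit (e.g. fake bold via overstrike). Sketch of enabling it:
#
#     parser = PDFPlumberParser(dedupe=True)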
class AmazonTextractPDFParser(BaseBlobParser):
"""Send `PDF` files to `Amazon Textract` and parse them.
For parsing multi-page PDFs, they have to reside on S3.
The AmazonTextractPDFLoader calls the
[Amazon Textract Service](https://aws.amazon.com/textract/)
to convert PDFs into a Document structure.
Single and multi-page documents are supported with up to 3000 pages
and 512 MB of size.
For the call to be successful an AWS account is required,
similar to the
[AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
requirements.
Besides the AWS configuration, it is very similar to the other PDF
loaders, while also supporting JPEG, PNG and TIFF and non-native
PDF formats.
```python
from langchain.document_loaders import AmazonTextractPDFLoader
loader=AmazonTextractPDFLoader("example_data/alejandro_rosalez_sample-small.jpeg")
documents = loader.load()
```
One feature is the linearization of the output.
When using the features LAYOUT, FORMS or TABLES together with Textract
```python
from langchain.document_loaders import AmazonTextractPDFLoader
# you can mix and match each of the features
loader=AmazonTextractPDFLoader(
"example_data/alejandro_rosalez_sample-small.jpeg",
textract_features=["TABLES", "LAYOUT"])
documents = loader.load()
```
it will generate output that formats the text in reading order and
try to output the information in a tabular structure or
output the key/value pairs with a colon (key: value).
This helps most LLMs to achieve better accuracy when
processing these texts.
"""
def __init__(
self,
textract_features: Optional[Sequence[int]] = None,
client: Optional[Any] = None,
) -> None:
"""Initializes the parser.
Args:
textract_features: Features to be used for extraction, each feature
should be passed as an int that conforms to the enum
`Textract_Features`, see `amazon-textract-caller` pkg
client: boto3 textract client
"""
try:
import textractcaller as tc
import textractor.entities.document as textractor
self.tc = tc
self.textractor = textractor
if textract_features is not None:
self.textract_features = [
tc.Textract_Features(f) for f in textract_features
]
else:
self.textract_features = []
except ImportError:
raise ImportError(
"Could not import amazon-textract-caller or "
"amazon-textract-textractor python package. Please install it "
"with `pip install amazon-textract-caller` & "
"`pip install amazon-textract-textractor`."
)
if not client:
try:
import boto3
self.boto3_textract_client = boto3.client("textract")
except ImportError:
raise ImportError(
"Could not import boto3 python package. "
"Please install it with `pip install boto3`."
)
else:
self.boto3_textract_client = client
def lazy_parse(self, blob: Blob) -> Iterator[Document]:
"""Iterates over the Blob pages and returns an Iterator with a Document
for each page, like the other parsers If multi-page document, blob.path
has to be set to the S3 URI and for single page docs
the blob.data is taken
"""
url_parse_result = urlparse(str(blob.path)) if blob.path else None
if (
url_parse_result
and url_parse_result.scheme == "s3"
and url_parse_result.netloc
):
textract_response_json = self.tc.call_textract(
input_document=str(blob.path),
features=self.textract_features,
boto3_textract_client=self.boto3_textract_client,
)
else:
textract_response_json = self.tc.call_textract(
input_document=blob.as_bytes(),
features=self.textract_features,
call_mode=self.tc.Textract_Call_Mode.FORCE_SYNC,
boto3_textract_client=self.boto3_textract_client,
)
document = self.textractor.Document.open(textract_response_json)
linearizer_config = self.textractor.TextLinearizationConfig(
hide_figure_layout=True,
title_prefix="# ",
section_header_prefix="## ",
list_element_prefix="*",
)
for idx, page in enumerate(document.pages):
yield Document(
page_content=page.get_text(config=linearizer_config),
metadata={"source": blob.source, "page": idx + 1},
)
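# Usage sketch (not module source): per the class docstring, multi-page
# documents must live on S3, so the loader is pointed at an s3:// URI;
# bucket and key below are hypothetical.
#
#     loader = AmazonTextractPDFLoader(
#         "s3://my-bucket/report.pdf",
#         textract_features=["TABLES", "LAYOUT"],
#     )
#     documents = loader.load()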
class DocumentIntelligenceParser(BaseBlobParser):
"""Loads a PDF with Azure Document Intelligence
(formerly Forms Recognizer) and chunks at character level."""
def __init__(self, client: Any, model: str):
self.client = client
self.model = model
def _generate_docs(self, blob: Blob, result: Any) -> Iterator[Document]:
for p in result.pages:
content = " ".join([line.content for line in p.lines])
d = Document(
page_content=content,
metadata={
"source": blob.source,
"page": p.page_number,
},
)
yield d
def lazy_parse(self, blob: Blob) -> Iterator[Document]:
"""Lazily parse the blob."""
with blob.as_bytes_io() as file_obj:
poller = self.client.begin_analyze_document(self.model, file_obj)
result = poller.result()
docs = self._generate_docs(blob, result)
            yield from docs
closed | langchain-ai/langchain | https://github.com/langchain-ai/langchain | 7,034 | Loading online PDFs gives temporary file path as source in metadata | https://github.com/langchain-ai/langchain/issues/7034 | https://github.com/langchain-ai/langchain/pull/13274 | 6f64cb5078bb71007d25fff847541fd8f7713c0c | 9bd6e9df365e966938979511237c035a02fb4fa9 | "2023-07-01T23:24:53Z" | python | "2023-11-29T20:07:46Z" | libs/langchain/langchain/document_loaders/pdf.py |
import json
import logging
import os
import tempfile
import time
from abc import ABC
from io import StringIO
from pathlib import Path
from typing import Any, Dict, Iterator, List, Mapping, Optional, Sequence, Union
from urllib.parse import urlparse
import requests
from langchain_core.documents import Document
from langchain.document_loaders.base import BaseLoader
from langchain.document_loaders.blob_loaders import Blob
from langchain.document_loaders.parsers.pdf import (
AmazonTextractPDFParser,
DocumentIntelligenceParser,
PDFMinerParser,
PDFPlumberParser,
PyMuPDFParser,
PyPDFium2Parser,
PyPDFParser,
)
from langchain.document_loaders.unstructured import UnstructuredFileLoader
from langchain.utils import get_from_dict_or_env
logger = logging.getLogger(__file__)
class UnstructuredPDFLoader(UnstructuredFileLoader):
    """Load `PDF` files using `Unstructured`.
You can run the loader in one of two modes: "single" and "elements".
If you use "single" mode, the document will be returned as a single
langchain Document object. If you use "elements" mode, the unstructured
library will split the document into elements such as Title and NarrativeText.
You can pass in additional unstructured kwargs after mode to apply
different unstructured settings.
Examples
--------
from langchain.document_loaders import UnstructuredPDFLoader
loader = UnstructuredPDFLoader(
"example.pdf", mode="elements", strategy="fast",
)
docs = loader.load()
References
----------
https://unstructured-io.github.io/unstructured/bricks.html#partition-pdf
"""
def _get_elements(self) -> List:
from unstructured.partition.pdf import partition_pdf
return partition_pdf(filename=self.file_path, **self.unstructured_kwargs)
class BasePDFLoader(BaseLoader, ABC):
"""Base Loader class for `PDF` files.
If the file is a web path, it will download it to a temporary file, use it, then
clean up the temporary file after completion.
"""
    def __init__(self, file_path: str, *, headers: Optional[Dict] = None):
        """Initialize with a file path.
Args:
file_path: Either a local, S3 or web path to a PDF file.
headers: Headers to use for GET request to download a file from a web path.
"""
self.file_path = file_path
self.web_path = None
self.headers = headers
if "~" in self.file_path:
self.file_path = os.path.expanduser(self.file_path)
if not os.path.isfile(self.file_path) and self._is_valid_url(self.file_path):
self.temp_dir = tempfile.TemporaryDirectory()
_, suffix = os.path.splitext(self.file_path)
temp_pdf = os.path.join(self.temp_dir.name, f"tmp{suffix}")
self.web_path = self.file_path
if not self._is_s3_url(self.file_path):
r = requests.get(self.file_path, headers=self.headers)
if r.status_code != 200:
raise ValueError(
"Check the url of your file; returned status code %s"
% r.status_code
)
with open(temp_pdf, mode="wb") as f:
f.write(r.content)
self.file_path = str(temp_pdf)
elif not os.path.isfile(self.file_path):
raise ValueError("File path %s is not a valid file or url" % self.file_path)
    def __del__(self) -> None:
        if hasattr(self, "temp_dir"):
self.temp_dir.cleanup()
@staticmethod
def _is_valid_url(url: str) -> bool:
"""Check if the url is valid."""
parsed = urlparse(url)
return bool(parsed.netloc) and bool(parsed.scheme)
@staticmethod
def _is_s3_url(url: str) -> bool:
"""check if the url is S3"""
try:
result = urlparse(url)
if result.scheme == "s3" and result.netloc:
return True
return False
except ValueError:
return False
@property
def source(self) -> str:
return self.web_path if self.web_path is not None else self.file_path
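# The ``source`` property above is what lets loaders built on this base report
# the original URL for web paths even though the bytes live in a temporary
# file. A sketch (the URL is illustrative, not from the original code):
#
#     loader = PyPDFLoader("https://example.com/sample.pdf")
#     loader.source     # -> "https://example.com/sample.pdf"
#     loader.file_path  # -> path of the temporary local copy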
class OnlinePDFLoader(BasePDFLoader):
    """Load online `PDF`."""
def load(self) -> List[Document]:
"""Load documents."""
loader = UnstructuredPDFLoader(str(self.file_path))
return loader.load()
class PyPDFLoader(BasePDFLoader):
"""Load PDF using pypdf into list of documents.
Loader chunks by page and stores page numbers in metadata.
"""
def __init__(
self,
file_path: str,
password: Optional[Union[str, bytes]] = None,
headers: Optional[Dict] = None,
extract_images: bool = False,
) -> None:
"""Initialize with a file path."""
try:
import pypdf
except ImportError:
raise ImportError(
"pypdf package not found, please install it with " "`pip install pypdf`"
)
super().__init__(file_path, headers=headers)
self.parser = PyPDFParser(password=password, extract_images=extract_images)
def load(self) -> List[Document]:
"""Load given path as pages."""
return list(self.lazy_load())
    def lazy_load(
        self,
) -> Iterator[Document]:
"""Lazy load given path as pages."""
if self.web_path:
blob = Blob.from_data(open(self.file_path, "rb").read(), path=self.web_path)
else:
blob = Blob.from_path(self.file_path)
yield from self.parser.parse(blob)
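# Because ``lazy_load`` builds the Blob with ``path=self.web_path`` when one is
# set, pages loaded from a URL carry that URL as ``metadata["source"]`` rather
# than the temporary download path. Reusing the URL from the bug report:
#
#     loader = PyPDFLoader(
#         "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf"
#     )
#     pages = loader.load()
#     pages[0].metadata["source"]  # -> the https:// URL, not a /tmp/... path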
class PyPDFium2Loader(BasePDFLoader):
"""Load `PDF` using `pypdfium2` and chunks at character level."""
def __init__(
self,
file_path: str,
*,
headers: Optional[Dict] = None,
extract_images: bool = False,
):
"""Initialize with a file path."""
super().__init__(file_path, headers=headers)
self.parser = PyPDFium2Parser(extract_images=extract_images)
def load(self) -> List[Document]:
"""Load given path as pages."""
return list(self.lazy_load())
def lazy_load(
self,
) -> Iterator[Document]:
"""Lazy load given path as pages."""
blob = Blob.from_path(self.file_path)
yield from self.parser.parse(blob)
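# Note that this loader always reads the Blob from the local ``file_path``, so
# for a web path the reported source would still be the temporary file. A
# hypothetical sketch of the same fix applied here, mirroring PyPDFLoader:
#
#     if self.web_path:
#         blob = Blob.from_data(
#             open(self.file_path, "rb").read(), path=self.web_path
#         )
#     else:
#         blob = Blob.from_path(self.file_path)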
class PyPDFDirectoryLoader(BaseLoader):
    """Load a directory with `PDF` files using `pypdf` and chunks at character level.
Loader also stores page numbers in metadata.
"""
def __init__(
self,
path: str,
glob: str = "**/[!.]*.pdf",
silent_errors: bool = False,
load_hidden: bool = False,
recursive: bool = False,
extract_images: bool = False,
):
self.path = path
self.glob = glob
self.load_hidden = load_hidden
self.recursive = recursive
self.silent_errors = silent_errors
self.extract_images = extract_images
@staticmethod
    def _is_visible(path: Path) -> bool:
        return not any(part.startswith(".") for part in path.parts)
def load(self) -> List[Document]:
p = Path(self.path)
docs = []
items = p.rglob(self.glob) if self.recursive else p.glob(self.glob)
for i in items:
if i.is_file():
if self._is_visible(i.relative_to(p)) or self.load_hidden:
try:
loader = PyPDFLoader(str(i), extract_images=self.extract_images)
sub_docs = loader.load()
for doc in sub_docs:
doc.metadata["source"] = str(i)
docs.extend(sub_docs)
except Exception as e:
if self.silent_errors:
logger.warning(e)
else:
raise e
return docs
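# Minimal usage sketch for the directory loader; the directory name is made up:
#
#     loader = PyPDFDirectoryLoader("reports/", recursive=True)
#     docs = loader.load()  # one Document per page across all matched PDFs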
class PDFMinerLoader(BasePDFLoader):
    """Load `PDF` files using `PDFMiner`."""
def __init__(
self,
file_path: str,
*,
headers: Optional[Dict] = None,
extract_images: bool = False,
concatenate_pages: bool = True,
) -> None:
"""Initialize with file path.
Args:
extract_images: Whether to extract images from PDF.
concatenate_pages: If True, concatenate all PDF pages into one a single
document. Otherwise, return one document per page.
"""
try:
from pdfminer.high_level import extract_text
except ImportError:
raise ImportError(
"`pdfminer` package not found, please install it with "
"`pip install pdfminer.six`"
)
super().__init__(file_path, headers=headers)
self.parser = PDFMinerParser(
extract_images=extract_images, concatenate_pages=concatenate_pages
)
    def load(self) -> List[Document]:
        """Eagerly load the content."""
return list(self.lazy_load())
def lazy_load(
self,
) -> Iterator[Document]:
"""Lazily load documents."""
blob = Blob.from_path(self.file_path)
yield from self.parser.parse(blob)
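# ``concatenate_pages`` controls the granularity of what ``load`` returns; a
# sketch with an illustrative file name:
#
#     loader = PDFMinerLoader("sample.pdf", concatenate_pages=False)
#     docs = loader.load()  # one Document per page instead of one per file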
class PDFMinerPDFasHTMLLoader(BasePDFLoader):
    """Load `PDF` files as HTML content using `PDFMiner`."""
def __init__(self, file_path: str, *, headers: Optional[Dict] = None):
"""Initialize with a file path."""
try:
from pdfminer.high_level import extract_text_to_fp
except ImportError:
raise ImportError(
"`pdfminer` package not found, please install it with "
"`pip install pdfminer.six`"
)
super().__init__(file_path, headers=headers)
def load(self) -> List[Document]:
"""Load file."""
from pdfminer.high_level import extract_text_to_fp
from pdfminer.layout import LAParams
from pdfminer.utils import open_filename
output_string = StringIO()
with open_filename(self.file_path, "rb") as fp:
extract_text_to_fp(
fp,
output_string,
codec="",
laparams=LAParams(),
output_type="html",
)
metadata = {"source": self.file_path}
return [Document(page_content=output_string.getvalue(), metadata=metadata)]
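# The single returned Document holds PDFMiner's HTML rendering, which callers
# usually post-process themselves, for example with BeautifulSoup (bs4 is an
# extra dependency and not required by this loader):
#
#     from bs4 import BeautifulSoup
#
#     docs = PDFMinerPDFasHTMLLoader("sample.pdf").load()
#     soup = BeautifulSoup(docs[0].page_content, "html.parser")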
class PyMuPDFLoader(BasePDFLoader):
    """Load `PDF` files using `PyMuPDF`."""
def __init__(
self,
file_path: str,
*,
headers: Optional[Dict] = None,
extract_images: bool = False,
**kwargs: Any,
) -> None:
"""Initialize with a file path."""
try:
import fitz
except ImportError:
raise ImportError(
"`PyMuPDF` package not found, please install it with "
"`pip install pymupdf`"
)
super().__init__(file_path, headers=headers)
self.extract_images = extract_images
self.text_kwargs = kwargs
    def load(self, **kwargs: Any) -> List[Document]:
        """Load file."""
if kwargs:
logger.warning(
f"Received runtime arguments {kwargs}. Passing runtime args to `load`"
f" is deprecated. Please pass arguments during initialization instead."
)
text_kwargs = {**self.text_kwargs, **kwargs}
parser = PyMuPDFParser(
text_kwargs=text_kwargs, extract_images=self.extract_images
)
blob = Blob.from_path(self.file_path)
return parser.parse(blob)
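# Extra PyMuPDF ``get_text`` options can be fixed once at construction time and
# flow through ``self.text_kwargs``; ``sort`` is one valid PyMuPDF option, and
# the file name is illustrative:
#
#     loader = PyMuPDFLoader("sample.pdf", sort=True)
#     docs = loader.load()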
class MathpixPDFLoader(BasePDFLoader):
    """Load `PDF` files using `Mathpix` service."""
def __init__(
self,
file_path: str,
processed_file_format: str = "md",
max_wait_time_seconds: int = 500,
should_clean_pdf: bool = False,
extra_request_data: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> None:
"""Initialize with a file path.
Args:
file_path: a file for loading.
processed_file_format: a format of the processed file. Default is "md".
max_wait_time_seconds: a maximum time to wait for the response from
the server. Default is 500.
should_clean_pdf: a flag to clean the PDF file. Default is False.
extra_request_data: Additional request data.
**kwargs: additional keyword arguments.
"""
self.mathpix_api_key = get_from_dict_or_env(
kwargs, "mathpix_api_key", "MATHPIX_API_KEY"
)
self.mathpix_api_id = get_from_dict_or_env(
kwargs, "mathpix_api_id", "MATHPIX_API_ID"
)
super().__init__(file_path, **kwargs)
self.processed_file_format = processed_file_format
        self.extra_request_data = (
            extra_request_data if extra_request_data is not None else {}
)
self.max_wait_time_seconds = max_wait_time_seconds
self.should_clean_pdf = should_clean_pdf
@property
def _mathpix_headers(self) -> Dict[str, str]:
return {"app_id": self.mathpix_api_id, "app_key": self.mathpix_api_key}
@property
def url(self) -> str:
return "https://api.mathpix.com/v3/pdf"
@property
def data(self) -> dict:
options = {
"conversion_formats": {self.processed_file_format: True},
**self.extra_request_data,
}
return {"options_json": json.dumps(options)}
def send_pdf(self) -> str:
with open(self.file_path, "rb") as f:
files = {"file": f}
response = requests.post(
self.url, headers=self._mathpix_headers, files=files, data=self.data
)
response_data = response.json()
if "pdf_id" in response_data:
pdf_id = response_data["pdf_id"]
return pdf_id
else:
raise ValueError("Unable to send PDF to Mathpix.")
    def wait_for_processing(self, pdf_id: str) -> None:
        """Wait for processing to complete.
Args:
pdf_id: a PDF id.
Returns: None
"""
url = self.url + "/" + pdf_id
for _ in range(0, self.max_wait_time_seconds, 5):
response = requests.get(url, headers=self.headers)
response_data = response.json()
status = response_data.get("status", None)
if status == "completed":
return
elif status == "error":
raise ValueError("Unable to retrieve PDF from Mathpix")
else:
print(f"Status: {status}, waiting for processing to complete")
time.sleep(5)
raise TimeoutError
def get_processed_pdf(self, pdf_id: str) -> str:
self.wait_for_processing(pdf_id)
url = f"{self.url}/{pdf_id}.{self.processed_file_format}"
response = requests.get(url, headers=self.headers)
return response.content.decode("utf-8")
    def clean_pdf(self, contents: str) -> str:
        """Clean the PDF file.
Args:
contents: a PDF file contents.
Returns:
"""
contents = "\n".join(
[line for line in contents.split("\n") if not line.startswith("![]")]
)
contents = contents.replace("\\section{", "# ").replace("}", "")
contents = (
contents.replace(r"\$", "$")
.replace(r"\%", "%")
.replace(r"\(", "(")
.replace(r"\)", ")")
)
return contents
def load(self) -> List[Document]:
pdf_id = self.send_pdf()
contents = self.get_processed_pdf(pdf_id)
if self.should_clean_pdf:
contents = self.clean_pdf(contents)
metadata = {"source": self.source, "file_path": self.source}
return [Document(page_content=contents, metadata=metadata)]
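# End-to-end sketch; requires MATHPIX_API_ID / MATHPIX_API_KEY (via kwargs or
# the environment) and uses an illustrative file name:
#
#     loader = MathpixPDFLoader("paper.pdf", should_clean_pdf=True)
#     docs = loader.load()  # one Document containing the converted Markdown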
class PDFPlumberLoader(BasePDFLoader):
    """Load `PDF` files using `pdfplumber`."""
def __init__(
self,
file_path: str,
text_kwargs: Optional[Mapping[str, Any]] = None,
dedupe: bool = False,
headers: Optional[Dict] = None,
extract_images: bool = False,
) -> None:
"""Initialize with a file path."""
try:
import pdfplumber
except ImportError:
raise ImportError(
"pdfplumber package not found, please install it with "
"`pip install pdfplumber`"
)
super().__init__(file_path, headers=headers)
self.text_kwargs = text_kwargs or {}
self.dedupe = dedupe
self.extract_images = extract_images
    def load(self) -> List[Document]:
        """Load file."""
        # Hand the configured options to the parser and parse the local blob,
        # following the same pattern as the sibling loaders above.
        parser = PDFPlumberParser(
            text_kwargs=self.text_kwargs,
            dedupe=self.dedupe,
            extract_images=self.extract_images,
        )
        blob = Blob.from_path(self.file_path)
        return parser.parse(blob)