issue_owner_repo (list, length 2) | issue_body (string, 0 to 261k chars, nullable ⌀) | issue_title (string, 1 to 925 chars) | issue_comments_url (string, 56 to 81 chars) | issue_comments_count (int64, 0 to 2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37 to 62 chars) | issue_github_id (int64, 387k to 2.46B) | issue_number (int64, 1 to 127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I want to connect my ConversationChain to Metaphor. Please help me on how to do it! Thanks
### Suggestion:
_No response_ | How to connect an LLM Chain to Metaphor | https://api.github.com/repos/langchain-ai/langchain/issues/14112/comments | 1 | 2023-12-01T06:47:10Z | 2024-03-16T16:08:41Z | https://github.com/langchain-ai/langchain/issues/14112 | 2,020,223,271 | 14,112 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
The link to subscribe to the "newsletter" at the bottom of Community page (`/docs/docs/community.md`) is no longer available.

As the image above indicates, the "form has moved to a new address", and the new address (https://form.typeform.com/to/KjZB1auB?typeform-source=6w1pwbss0py.typeform.com) gives "This typeform is now closed", as shown in the image below:

### Idea or request for content:
If the newsletter is still operating, replace the current link with a new one where people can subscribe to it.
If the newsletter is no longer running, replace the text with something else (eg. the link to langchain blogs). | DOC: Link to subscribe to the "newsletter" on Community page is no longer valid | https://api.github.com/repos/langchain-ai/langchain/issues/14108/comments | 1 | 2023-12-01T04:52:55Z | 2024-02-25T15:14:52Z | https://github.com/langchain-ai/langchain/issues/14108 | 2,020,059,736 | 14,108 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am using `load_qa_chain()`:
chain = load_qa_chain(OpenAI(temperature=0.25, model_name="gpt-3.5-turbo-1106"), chain_type="stuff", prompt=PROMPT)
I got the following error:
"""
APIRemovedInV1:
You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at
https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
"""
The line that raised the error is `anaconda3/envs/Doc2/lib/site-packages/langchain/llms/openai.py:115`.
### Suggestion:
_No response_ | Issue: openai.ChatCompletion is no longer supported in openai>=1.0.0 | https://api.github.com/repos/langchain-ai/langchain/issues/14107/comments | 4 | 2023-12-01T03:57:52Z | 2023-12-11T05:58:02Z | https://github.com/langchain-ai/langchain/issues/14107 | 2,020,008,509 | 14,107 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to use `langchain_core.runnables.base.RunnableMap` and I want to know whether there is any way to give a different input to each chain.
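For context, the pattern I'm after looks roughly like the sketch below. `summary_chain` and `translation_chain` are placeholders for my own runnables; the idea is that each branch pulls a different key out of the shared input with `itemgetter`:
```python
from operator import itemgetter

from langchain_core.runnables import RunnableMap

# summary_chain / translation_chain are stand-ins for existing runnables.
# Each branch picks its own key from the shared input dict,
# so every chain effectively receives a different input.
map_chain = RunnableMap(
    {
        "summary": itemgetter("article") | summary_chain,
        "translation": itemgetter("question") | translation_chain,
    }
)

map_chain.invoke({"article": "...", "question": "..."})
```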
### Suggestion:
_No response_ | Runnablemap different input for each chain | https://api.github.com/repos/langchain-ai/langchain/issues/14098/comments | 6 | 2023-12-01T01:09:07Z | 2024-04-12T13:52:30Z | https://github.com/langchain-ai/langchain/issues/14098 | 2,019,840,343 | 14,098 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.343
langchain-core 0.0.7
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
retriever = AzureCognitiveSearchRetriever(content_key="content", top_k=5)
llm = AzureChatOpenAI(
openai_api_version="2023-05-15",
azure_deployment="larger-chat-35",
)
template = """
I want you to act as a research assistant. you will answer questions about the context.
Context: {context}
Question: {question}?
"""
prompt = PromptTemplate(
input_variables=["question", "context"],
template=template,
)
compressor = LLMChainExtractor.from_llm(llm)
compression_retriever = ContextualCompressionRetriever(base_compressors=compressor,
base_retriever=retriever)
```
### Expected behavior
I'd expect the compression_retriever to be instantiated. But instead I receive the following error:
```
pydantic.v1.error_wrappers.ValidationError: 1 validation error for ContextualCompressionRetriever
base_compressor
field required (type=value_error.missing)
``` | Pydantic error field required (type=value_error.missing) in LLMChainExtractor or ContextualCompressionRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/14088/comments | 3 | 2023-11-30T20:53:24Z | 2023-11-30T22:24:23Z | https://github.com/langchain-ai/langchain/issues/14088 | 2,019,492,749 | 14,088 |
[
"hwchase17",
"langchain"
] | ### Feature request
The `invoke()` method for RunnableWithFallbacks is only raising the first_error caught in the runnables. I would like to have an option to choose whether to raise the first error or last error.
Can this be a parameter for `invoke()`?
### Motivation
When using fallbacks, the first error could be related to the first Runnable. However, the error I'm actually interested in handling is the last error, from the end of my fallbacks.
The last error is the most downstream error from my chains, and it's the error I want to handle in my app business logic.
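To illustrate, this is the kind of switch I have in mind. It is purely hypothetical: `raise_last_error` does not exist today, and `primary_chain` / `fallback_chain` stand in for my own runnables:
```python
# primary_chain / fallback_chain are stand-ins for existing runnables
chain_with_fallbacks = primary_chain.with_fallbacks([fallback_chain])

# today: if every runnable fails, the first error is raised
chain_with_fallbacks.invoke({"question": "..."})

# requested: opt in to surfacing the most downstream (last) error instead
chain_with_fallbacks.invoke({"question": "..."}, raise_last_error=True)  # hypothetical parameter
```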
### Your contribution
N/A | RunnableWithFallbacks should have a parameter to choose whether to raise first error or last error | https://api.github.com/repos/langchain-ai/langchain/issues/14085/comments | 1 | 2023-11-30T20:12:47Z | 2024-03-16T16:08:31Z | https://github.com/langchain-ai/langchain/issues/14085 | 2,019,431,096 | 14,085 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain== 0.0.340
pydantic==2.4.2
pydantic_core==2.10.1
### Who can help?
@hwchase17 @eyur
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.schema import BaseOutputParser
from pydantic import PrivateAttr
class MyParser(BaseOutputParser[list[int]]):
    _list: list[int] = PrivateAttr(default_factory=list)

    def parse(self, text: str) -> list[int]:
        return self._list

parser = MyParser()
assert parser.parse("test") == []
```
### Expected behavior
I expect `PrivateAttr` to still work within `BaseOutputParser`. | `BaseOutputParser` breaks pydantic `PrivateAttr` | https://api.github.com/repos/langchain-ai/langchain/issues/14084/comments | 4 | 2023-11-30T19:56:09Z | 2023-12-02T18:33:06Z | https://github.com/langchain-ai/langchain/issues/14084 | 2,019,400,383 | 14,084 |
[
"hwchase17",
"langchain"
] | ### System Info
in AWS langchain Bedrock, how do I set temperature?
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried putting it as a parameter of Bedrock and got an error that no such parameter exists. Then I put it in as part of the kwargs and it had no effect.
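For reference, this is roughly the shape I tried (a sketch assuming an Anthropic Claude model on Bedrock; the keys accepted inside `model_kwargs` depend on the underlying model provider, and the model id and region here are only examples):
```python
from langchain.llms import Bedrock

llm = Bedrock(
    model_id="anthropic.claude-v2",  # example model id, adjust to your deployment
    region_name="us-east-1",         # example region
    # model_kwargs is forwarded into the provider-specific request body
    model_kwargs={
        "temperature": 0.1,
        "max_tokens_to_sample": 512,
    },
)

print(llm("Tell me a short joke."))
```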
### Expected behavior
Expected this to be a direct parameter of Bedrock, but it seems that it is not | in AWS langchain Bedrock, how do I set temperature? | https://api.github.com/repos/langchain-ai/langchain/issues/14083/comments | 19 | 2023-11-30T19:51:15Z | 2024-05-23T10:16:06Z | https://github.com/langchain-ai/langchain/issues/14083 | 2,019,393,730 | 14,083 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am using ConversationalRetrievalChain. I have created two vector stores and I want the query from ConversationalRetrievalChain to be compared against both vector stores and results from both vector stores to be used to create the final answer.
So I have decided to create two retrievers
retriever1 = vectorstore1.as_retriever()
retriever2 = vectorstore2.as_retriever()
How can I override the Retriever class so that when the query is run against my custom_retriever, it is compared against documents from both retrievers, and documents from both are used to create the prompt?
Note: I don't want to merge the vector stores because that messes up the similarity search.
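For context, the closest thing I've found is combining them at the retriever level rather than merging the stores, roughly like the sketch below (using the built-in `EnsembleRetriever`; `llm` is my existing chat model, and I'm not sure this is the intended approach, which is why I'm asking):
```python
from langchain.retrievers import EnsembleRetriever

retriever1 = vectorstore1.as_retriever(search_kwargs={"k": 3})
retriever2 = vectorstore2.as_retriever(search_kwargs={"k": 3})

# Runs the query against both retrievers and fuses the two result lists,
# so each vector store keeps its own similarity search.
combined_retriever = EnsembleRetriever(
    retrievers=[retriever1, retriever2],
    weights=[0.5, 0.5],
)

chain = ConversationalRetrievalChain.from_llm(llm=llm, retriever=combined_retriever)
```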
### Suggestion:
_No response_ | Combine langchain retrievers | https://api.github.com/repos/langchain-ai/langchain/issues/14082/comments | 9 | 2023-11-30T19:39:09Z | 2024-03-18T16:07:04Z | https://github.com/langchain-ai/langchain/issues/14082 | 2,019,376,935 | 14,082 |
[
"hwchase17",
"langchain"
] | ### System Info
Mac OS
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When I am using Agent, random seed doesn't work.
```python
model_kwargs = {
    "seed": 2139
}

llm_agent = ChatOpenAI(temperature=0,
                       openai_api_key='sk-',
                       model_name='gpt-4-1106-preview',
                       model_kwargs=model_kwargs)
llm_cypher = ChatOpenAI(temperature=0,
                        openai_api_key='sk-',
                        model_name='gpt-4-1106-preview',
                        model_kwargs=model_kwargs)
llm_graph = ChatOpenAI(temperature=0,
                       openai_api_key='sk-',
                       model_name='gpt-4-1106-preview',
                       model_kwargs=model_kwargs)

tools = [
    Tool(
        name="GraphDB Search tool",
        func=chain,
        description=tool_description
    )
]

chain = CustomGraphCypherQAChain.from_llm(
    top_k=100,
    llm=llm_graph,
    return_direct=True,  # this return the observation from cypher query directly without rethinking over observation
    return_intermediate_steps=True,  # this is only returned when it's call by chain() not by chain.run()
    cypher_llm=llm_cypher,
    graph=graph,
    verbose=True,
    cypher_prompt=CYPHER_GENERATION_PROMPT
)

def init_agent():
    agent = initialize_agent(
        tools,
        llm_agent,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
        handle_errors=True,
        handle_parsing_errors=True,
        return_intermediate_steps=True,
        max_iterations=10,
    )
    return agent
```
This agent doesn't reproduce the same output for the same input.
### Expected behavior
I expect the Agent to keep producing the same answer for the same input prompt | Random Seed when using LLM Agent | https://api.github.com/repos/langchain-ai/langchain/issues/14080/comments | 1 | 2023-11-30T18:56:50Z | 2024-03-17T16:07:12Z | https://github.com/langchain-ai/langchain/issues/14080 | 2,019,317,039 | 14,080 |
[
"hwchase17",
"langchain"
] | ### Feature request
Support for Perplexity's new [PPLX models](https://blog.perplexity.ai/blog/introducing-pplx-online-llms).
### Motivation
Boosting online LLM support, particularly for Perplexity-based models, would be highly impactful and useful.
### Your contribution
Currently working on a PR! | Support for Perplexity's PPLX models: `pplx-7b-online` and `pplx-70b-online` | https://api.github.com/repos/langchain-ai/langchain/issues/14079/comments | 2 | 2023-11-30T18:41:11Z | 2024-05-01T16:05:38Z | https://github.com/langchain-ai/langchain/issues/14079 | 2,019,294,151 | 14,079 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello everyone,
Our calls are hitting request timeouts and are being retried after failing, but we want to see how many times these retries happen, along with all of their logs.
When the requests time out, nothing is sent to the callback handler, and we want to see how many retries were performed under the ChatOpenAI LLM model.
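What I've been trying is a custom handler along these lines (a sketch; I'm assuming the `on_retry` hook is what should fire on each retry, but so far nothing reaches it):
```python
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI

class RetryLoggingHandler(BaseCallbackHandler):
    def on_retry(self, retry_state, *, run_id, parent_run_id=None, **kwargs):
        # retry_state is a tenacity RetryCallState describing the current attempt
        print(f"retry attempt {retry_state.attempt_number} for run {run_id}: {retry_state.outcome}")

llm = ChatOpenAI(request_timeout=10, max_retries=3, callbacks=[RetryLoggingHandler()])
```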
### Suggestion:
_No response_ | MyCallbackHandler can't show logs explicitly | https://api.github.com/repos/langchain-ai/langchain/issues/14077/comments | 1 | 2023-11-30T18:20:20Z | 2024-03-16T16:08:11Z | https://github.com/langchain-ai/langchain/issues/14077 | 2,019,257,671 | 14,077 |
[
"hwchase17",
"langchain"
] | ### System Info
Regardless of the text length, the QAGenerationChain consistently generates only one question.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chains import QAGenerationChain
from langchain.llms import OpenAI
# Initialize the language model
lm = OpenAI()
# Create the QA Generator Chain
qa_chain = QAGenerationChain.from_llm(llm=lm)
qa_chain.k = 4
# Example usage
context = """
Introduction
LangChain is a framework for developing applications powered by language models. It enables applications that:
Are context-aware: connect a language model to sources of context (prompt instructions, few shot examples, content to ground its response in, etc.)
Reason: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.)
This framework consists of several parts.
LangChain Libraries: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic run time for combining these components into chains and agents, and off-the-shelf implementations of chains and agents.
LangChain Templates: A collection of easily deployable reference architectures for a wide variety of tasks.
LangServe: A library for deploying LangChain chains as a REST API.
LangSmith: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain.
LangChain Diagram
Together, these products simplify the entire application lifecycle:
Develop: Write your applications in LangChain/LangChain.js. Hit the ground running using Templates for reference.
Productionize: Use LangSmith to inspect, test and monitor your chains, so that you can constantly improve and deploy with confidence.
Deploy: Turn any chain into an API with LangServe.
LangChain Libraries
The main value props of the LangChain packages are:
Components: composable tools and integrations for working with language models. Components are modular and easy-to-use, whether you are using the rest of the LangChain framework or not
Off-the-shelf chains: built-in assemblages of components for accomplishing higher-level tasks
Off-the-shelf chains make it easy to get started. Components make it easy to customize existing chains and build new ones.
Get started
Here’s how to install LangChain, set up your environment, and start building.
We recommend following our Quickstart guide to familiarize yourself with the framework by building your first LangChain application.
Read up on our Security best practices to make sure you're developing safely with LangChain.
NOTE
These docs focus on the Python LangChain library. Head here for docs on the JavaScript LangChain library.
LangChain Expression Language (LCEL)
LCEL is a declarative way to compose chains. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains.
Overview: LCEL and its benefits
Interface: The standard interface for LCEL objects
How-to: Key features of LCEL
Cookbook: Example code for accomplishing common tasks
Modules
LangChain provides standard, extendable interfaces and integrations for the following modules:
Model I/O
Interface with language models
Retrieval
Interface with application-specific data
Agents
Let models choose which tools to use given high-level directives
Examples, ecosystem, and resources
Use cases
Walkthroughs and techniques for common end-to-end use cases, like:
Document question answering
Chatbots
Analyzing structured data
and much more...
Integrations
LangChain is part of a rich ecosystem of tools that integrate with our framework and build on top of it. Check out our growing list of integrations.
Guides
Best practices for developing with LangChain.
API reference
Head to the reference section for full documentation of all classes and methods in the LangChain and LangChain Experimental Python packages.
Developer's guide
Check out the developer's guide for guidelines on contributing and help getting your dev environment set up.
Community
Head to the Community navigator to find places to ask questions, share feedback, meet other developers, and dream about the future of LLM’s.
"""
questions = qa_chain.run(context)
print(questions)
```
output
```
[{'question': 'What are the main value props of the LangChain packages?', 'answer': 'The main value props of the LangChain packages are composable tools and integrations for working with language models, off-the-shelf chains for accomplishing higher-level tasks, and the ability to easily deploy chains as a REST API.'}]
```
### Expected behavior
I expect to specify the number of responses by setting the 'k' value. | Regardless of the text length, the QAGenerationChain consistently generates only one question. | https://api.github.com/repos/langchain-ai/langchain/issues/14074/comments | 1 | 2023-11-30T17:21:42Z | 2024-03-16T16:08:06Z | https://github.com/langchain-ai/langchain/issues/14074 | 2,019,146,581 | 14,074 |
[
"hwchase17",
"langchain"
] | ### System Info
LangChain Version: 0.0.339
ChromaDB 0.4.18
sentence-transformers 2.2.1
Python 3.12.0
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a requirements file with these entries:
langchain
chromadb
sentence-transformers
Run pip install -r requirements.txt
### Expected behavior
I expect the install to complete correctly. | Dependency Issues with sentence-transformers and chromadb | https://api.github.com/repos/langchain-ai/langchain/issues/14073/comments | 1 | 2023-11-30T15:52:09Z | 2024-03-16T16:08:01Z | https://github.com/langchain-ai/langchain/issues/14073 | 2,018,961,211 | 14,073 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.11.5
langchain 0.0.343
langchain-core 0.0.7
sqlalchemy 2.0.21
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
# make a little database
import sqlite3
con = sqlite3.connect("test.db")
cur = con.cursor()
cur.execute("DROP TABLE IF EXISTS test_table")
cur.execute("CREATE TABLE test_table(a, b, c, d)")
cur.execute("INSERT INTO test_table VALUES (1,2,3,4)")
cur.execute("INSERT INTO test_table VALUES (4,5,6,7)")
cur.execute("INSERT INTO test_table VALUES (8,9,10,11)")
con.commit()
# then...
from langchain.sql_database import SQLDatabase
db = SQLDatabase.from_uri("sqlite:///test.db", include_tables=["test_table"], sample_rows_in_table_info=2)
print(db.table_info)
---------------------------------------------------------------------------
OperationalError Traceback (most recent call last)
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1965, in Connection._exec_single_context(self, dialect, context, statement, parameters)
1964 if not evt_handled:
-> 1965 self.dialect.do_execute(
1966 cursor, str_statement, effective_parameters, context
1967 )
1969 if self._has_events or self.engine._has_events:
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/sqlalchemy/engine/default.py:921, in DefaultDialect.do_execute(self, cursor, statement, parameters, context)
920 def do_execute(self, cursor, statement, parameters, context=None):
--> 921 cursor.execute(statement, parameters)
OperationalError: near "FROM": syntax error
The above exception was the direct cause of the following exception:
OperationalError Traceback (most recent call last)
Cell In[23], line 3
1 from langchain.sql_database import SQLDatabase
2 db = SQLDatabase.from_uri("sqlite:///test.db", include_tables=["test_table"], sample_rows_in_table_info=2)
----> 3 print(db.table_info)
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/langchain/utilities/sql_database.py:286, in SQLDatabase.table_info(self)
283 @property
284 def table_info(self) -> str:
285 """Information about all tables in the database."""
--> 286 return self.get_table_info()
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/langchain/utilities/sql_database.py:334, in SQLDatabase.get_table_info(self, table_names)
332 table_info += f"\n{self._get_table_indexes(table)}\n"
333 if self._sample_rows_in_table_info:
--> 334 table_info += f"\n{self._get_sample_rows(table)}\n"
335 if has_extra_info:
336 table_info += "*/"
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/langchain/utilities/sql_database.py:357, in SQLDatabase._get_sample_rows(self, table)
354 try:
355 # get the sample rows
356 with self._engine.connect() as connection:
--> 357 sample_rows_result = connection.execute(command) # type: ignore
358 # shorten values in the sample rows
359 sample_rows = list(
360 map(lambda ls: [str(i)[:100] for i in ls], sample_rows_result)
361 )
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1412, in Connection.execute(self, statement, parameters, execution_options)
1410 raise exc.ObjectNotExecutableError(statement) from err
1411 else:
-> 1412 return meth(
1413 self,
1414 distilled_parameters,
1415 execution_options or NO_OPTIONS,
1416 )
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/sqlalchemy/sql/elements.py:516, in ClauseElement._execute_on_connection(self, connection, distilled_params, execution_options)
514 if TYPE_CHECKING:
515 assert isinstance(self, Executable)
--> 516 return connection._execute_clauseelement(
517 self, distilled_params, execution_options
518 )
519 else:
520 raise exc.ObjectNotExecutableError(self)
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1635, in Connection._execute_clauseelement(self, elem, distilled_parameters, execution_options)
1623 compiled_cache: Optional[CompiledCacheType] = execution_options.get(
1624 "compiled_cache", self.engine._compiled_cache
1625 )
1627 compiled_sql, extracted_params, cache_hit = elem._compile_w_cache(
1628 dialect=dialect,
1629 compiled_cache=compiled_cache,
(...)
1633 linting=self.dialect.compiler_linting | compiler.WARN_LINTING,
1634 )
-> 1635 ret = self._execute_context(
1636 dialect,
1637 dialect.execution_ctx_cls._init_compiled,
1638 compiled_sql,
1639 distilled_parameters,
1640 execution_options,
1641 compiled_sql,
1642 distilled_parameters,
1643 elem,
1644 extracted_params,
1645 cache_hit=cache_hit,
1646 )
1647 if has_events:
1648 self.dispatch.after_execute(
1649 self,
1650 elem,
(...)
1654 ret,
1655 )
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1844, in Connection._execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw)
1839 return self._exec_insertmany_context(
1840 dialect,
1841 context,
1842 )
1843 else:
-> 1844 return self._exec_single_context(
1845 dialect, context, statement, parameters
1846 )
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1984, in Connection._exec_single_context(self, dialect, context, statement, parameters)
1981 result = context._setup_result_proxy()
1983 except BaseException as e:
-> 1984 self._handle_dbapi_exception(
1985 e, str_statement, effective_parameters, cursor, context
1986 )
1988 return result
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/sqlalchemy/engine/base.py:2339, in Connection._handle_dbapi_exception(self, e, statement, parameters, cursor, context, is_sub_exec)
2337 elif should_wrap:
2338 assert sqlalchemy_exception is not None
-> 2339 raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
2340 else:
2341 assert exc_info[1] is not None
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1965, in Connection._exec_single_context(self, dialect, context, statement, parameters)
1963 break
1964 if not evt_handled:
-> 1965 self.dialect.do_execute(
1966 cursor, str_statement, effective_parameters, context
1967 )
1969 if self._has_events or self.engine._has_events:
1970 self.dispatch.after_cursor_execute(
1971 self,
1972 cursor,
(...)
1976 context.executemany,
1977 )
File ~/miniconda3/envs/forScience/lib/python3.11/site-packages/sqlalchemy/engine/default.py:921, in DefaultDialect.do_execute(self, cursor, statement, parameters, context)
920 def do_execute(self, cursor, statement, parameters, context=None):
--> 921 cursor.execute(statement, parameters)
OperationalError: (sqlite3.OperationalError) near "FROM": syntax error
[SQL: SELECT
FROM test_table
LIMIT ? OFFSET ?]
[parameters: (2, 0)]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
```
### Expected behavior
From https://python.langchain.com/docs/use_cases/qa_structured/sql I expect:
```
CREATE TABLE "test_table" (
"a" INTEGER,
"b" INTEGER,
"c" INTEGER,
"d" INTEGER
)
/*
2 rows from test_table table:
a b c d
1 2 3 4
5 6 7 8
*/
```
| SQLDatabase.get_table_info is not returning useful information | https://api.github.com/repos/langchain-ai/langchain/issues/14071/comments | 5 | 2023-11-30T15:03:55Z | 2023-12-03T10:30:35Z | https://github.com/langchain-ai/langchain/issues/14071 | 2,018,865,714 | 14,071 |
[
"hwchase17",
"langchain"
] | ### System Info
When I use the below snippet of code
```
import os
from azure.identity import DefaultAzureCredential
from azure.identity import get_bearer_token_provider
from langchain.llms import AzureOpenAI
from langchain.chat_models import AzureChatOpenAI
credential = DefaultAzureCredential(interactive_browser_tenant_id=tenant_id,
interactive_browser_client_id=client_id,
client_secret=client_secret)
token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")
endpoint = "https://xxxx.openai.azure.com"
client = AzureOpenAI( azure_endpoint=endpoint,
api_version="2023-05-15",
azure_deployment="example-gpt-4",
azure_ad_token_provider=token_provider)
```
I get this error:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[36], line 21
18 # api_version = "2023-05-15"
19 endpoint = "https://xxxx.openai.azure.com"
---> 21 client = AzureOpenAI(
22 azure_endpoint=endpoint,
23 api_version="2023-05-15",
24 azure_deployment="example-gpt-4",
25 azure_ad_token_provider=token_provider,
26 )
File ~/PycharmProjects/aicc/env/lib/python3.9/site-packages/langchain_core/load/serializable.py:97, in Serializable.__init__(self, **kwargs)
96 def __init__(self, **kwargs: Any) -> None:
---> 97 super().__init__(**kwargs)
98 self._lc_kwargs = kwargs
File ~/PycharmProjects/aicc/env/lib/python3.9/site-packages/pydantic/v1/main.py:339, in BaseModel.__init__(__pydantic_self__, **data)
333 """
334 Create a new model by parsing and validating input data from keyword arguments.
335
336 Raises ValidationError if the input data cannot be parsed to form a valid model.
337 """
338 # Uses something other than `self` the first arg to allow "self" as a settable attribute
--> 339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
341 raise validation_error
File ~/PycharmProjects/aicc/env/lib/python3.9/site-packages/pydantic/v1/main.py:1102, in validate_model(model, input_data, cls)
1100 continue
1101 try:
-> 1102 values = validator(cls_, values)
1103 except (ValueError, TypeError, AssertionError) as exc:
1104 errors.append(ErrorWrapper(exc, loc=ROOT_KEY))
File ~/PycharmProjects/aicc/env/lib/python3.9/site-packages/langchain/llms/openai.py:887, in AzureOpenAI.validate_environment(cls, values)
877 values["openai_api_base"] += (
878 "/deployments/" + values["deployment_name"]
879 )
880 values["deployment_name"] = None
881 client_params = {
882 "api_version": values["openai_api_version"],
883 "azure_endpoint": values["azure_endpoint"],
884 "azure_deployment": values["deployment_name"],
885 "api_key": values["openai_api_key"],
886 "azure_ad_token": values["azure_ad_token"],
--> 887 "azure_ad_token_provider": values["azure_ad_token_provider"],
888 "organization": values["openai_organization"],
889 "base_url": values["openai_api_base"],
890 "timeout": values["request_timeout"],
891 "max_retries": values["max_retries"],
892 "default_headers": values["default_headers"],
893 "default_query": values["default_query"],
894 "http_client": values["http_client"],
895 }
896 values["client"] = openai.AzureOpenAI(**client_params).completions
897 values["async_client"] = openai.AsyncAzureOpenAI(
898 **client_params
899 ).completions
KeyError: 'azure_ad_token_provider'
```
I've also tried AzureChatOpenAI, and I get the same error back.
The error is not reproduced when I use the openai library's AzureOpenAI directly.
Also, in openai the field is typed `azure_ad_token_provider: 'AzureADTokenProvider | None' = None`, while in langchain it is `azure_ad_token_provider: Optional[str] = None`, which makes me wonder whether it needs to accept a different type than string to work.
Any ideas on how to fix this? I'm actually using Azure service principal authentication, and if I instead use the alternative field `azure_ad_token = credential.get_token("https://cognitiveservices.azure.com/.default").token`, the token expires after 60 minutes, which does not happen with a bearer token provider, so it is important to me to make the token_provider work.
Libraries:
pydantic 1.10.12
pydantic_core 2.10.1
openai 1.2.0
langchain 0.0.342
langchain-core 0.0.7
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
from azure.identity import DefaultAzureCredential
from azure.identity import get_bearer_token_provider
from langchain.llms import AzureOpenAI
from langchain.chat_models import AzureChatOpenAI

credential = DefaultAzureCredential(interactive_browser_tenant_id=tenant_id,
                                    interactive_browser_client_id=client_id,
                                    client_secret=client_secret)
token_provider = get_bearer_token_provider(credential, "https://cognitiveservices.azure.com/.default")

endpoint = "https://xxxx.openai.azure.com"
client = AzureOpenAI(azure_endpoint=endpoint,
                     api_version="2023-05-15",
                     azure_deployment="example-gpt-4",
                     azure_ad_token_provider=token_provider)
```
### Expected behavior
client = AzureOpenAI( azure_endpoint=endpoint,
api_version="2023-05-15",
azure_deployment="example-gpt-4",
azure_ad_token_provider=token_provider)
should return a Runnable instance which I can use for LLMChain | AzureOpenAI azure_ad_token_provider Keyerror | https://api.github.com/repos/langchain-ai/langchain/issues/14069/comments | 6 | 2023-11-30T13:39:55Z | 2023-12-05T23:54:12Z | https://github.com/langchain-ai/langchain/issues/14069 | 2,018,694,209 | 14,069 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
At the end of the [summarization documentation](https://python.langchain.com/docs/use_cases/summarization), the output is shown with a ValueError. The line that fails:
```python
summarize_document_chain.run(docs[0])
```
### Idea or request for content:
Looks like the `docs[0]` object is not what is expected. | DOC: Summarization output broken | https://api.github.com/repos/langchain-ai/langchain/issues/14066/comments | 0 | 2023-11-30T11:35:27Z | 2024-03-18T16:06:59Z | https://github.com/langchain-ai/langchain/issues/14066 | 2,018,460,541 | 14,066 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
When using `langchain.chat_models` such as `ChatAnthropic` and `ChatOpenAI`, I am seeing a bunch of HTTP logs that look like the following:
`2023-11-30 16:43:09 httpx INFO: HTTP Request: POST https://api.anthropic.com/v1/complete "HTTP/1.1 200 OK"`
Any suggestions on how to disable them?
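For what it's worth, the only workaround I've found so far is to raise the level of the `httpx` logger directly (assuming that is really where these records originate):
```python
import logging

# Hide the INFO-level "HTTP Request: ..." lines emitted by the httpx client
# that the chat model integrations use under the hood.
logging.getLogger("httpx").setLevel(logging.WARNING)
```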
### Suggestion:
_No response_ | Disable HTTP Request logging in Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14065/comments | 3 | 2023-11-30T11:22:43Z | 2024-07-26T11:15:51Z | https://github.com/langchain-ai/langchain/issues/14065 | 2,018,435,817 | 14,065 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I've created a custom tool to query the equity daily quote. I would like to have two parameters in the tool function, stock_name and trade_date, and I would expect the LLM Agent to automatically decide how many days' quotes, or which day's quote, it should retrieve.
```
class SearchSchema(BaseModel):
    stock_name: str = Field(description="should be the name of the equity.")
    trade_date: str = Field(description="should be the trading date of the equity.")

class DailyQuoteSearch(BaseTool):
    name = "equity daily quote search"
    description = "useful for equity daily quote retrieval"
    return_direct = False
    args_schema: Type[SearchSchema] = SearchSchema

    def _run(self, stock_name: str, trade_date: str) -> str:
        output = self.query_equity_daily_quote(stock_name, trade_date)
        return output
```
However, the LLM doesn't seem to know how to pass in the trade date. Is it possible to have the LLM reason about what trade_date should be passed, or am I expecting too much of the agent's intelligence?
```
pydantic_core._pydantic_core.ValidationError: 1 validation error for SearchSchema
trade_date
Field required [type=missing, input_value={'stock_name': 'xxxx'}, input_type=dict]
```
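One workaround I'm considering is making `trade_date` optional with a richer description, so the agent can either omit it or fill it in itself. A sketch (reusing `BaseModel`/`Field` from above; I don't know if this is the recommended pattern):
```python
from typing import Optional

class SearchSchema(BaseModel):
    stock_name: str = Field(description="should be the name of the equity.")
    trade_date: Optional[str] = Field(
        default=None,
        description="the trading date to query, formatted as YYYY-MM-DD; "
                    "leave empty to fetch the most recent quote.",
    )
```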
### Suggestion:
_No response_ | How to have agent decide what param should be pass into a tool? | https://api.github.com/repos/langchain-ai/langchain/issues/14064/comments | 2 | 2023-11-30T10:31:15Z | 2024-03-17T16:06:57Z | https://github.com/langchain-ai/langchain/issues/14064 | 2,018,340,971 | 14,064 |
[
"hwchase17",
"langchain"
] | ### System Info
I am using gpt-4 deployed on AzureOpenAI.
I want to get the model used. From openai, I will get the model we used. But when I tried with langchain I got an older model.
Basically, I am integrating with other services. So it is essential to get the model name intact for the cost calculations.
What could be the issue with the langcahin?
It works great with openai chat completion.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
```
import os
from openai import AzureOpenAI
client = AzureOpenAI(
azure_endpoint = "{}",
api_key="{}",
api_version="{}",
azure_deployment="gpt4",
)
response = client.chat.completions.create(
model="gpt4",
messages=[{"role": "user", "content": "tell me a joke"}]
)
response.model
```
> 'gpt-4'
```
from langchain.schema import HumanMessage
from langchain.chat_models.azure_openai import AzureChatOpenAI
llm = AzureChatOpenAI(
azure_endpoint = "{}",
openai_api_key="{}",
openai_api_version="{}",
azure_deployment="gpt4",
model_version="0613",
temperature=0
)
response = llm.generate(messages=[[HumanMessage(content="tell me a joke")]])
response.llm_output["model_name"]
```
> 'gpt-3.5-turbo'
```
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
prompt = PromptTemplate(
input_variables=["joke"],
template="Tell me a {joke}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
response = chain.generate([{"joke":"joke"}])
response.llm_output["model_name"]
```
> 'gpt-3.5-turbo'
### Expected behavior
From all of these we should be getting gpt-4, as I have deployed only gpt-4 on Azure OpenAI | Azure OpenAI, gpt-4 model returns gpt-35-turbo | https://api.github.com/repos/langchain-ai/langchain/issues/14062/comments | 3 | 2023-11-30T09:38:58Z | 2023-12-01T07:08:48Z | https://github.com/langchain-ai/langchain/issues/14062 | 2,018,246,092 | 14,062 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I want to customize a LLM (a GPT-3.5-turbo) for chatting but don't proceed with chatting until the user provides his name and his phone number and validates the count of his phone number.
Please help me I tried many methods but without any significant result
### Suggestion:
_No response_ | Don't proceed with chatting until the user provides his name and phone number | https://api.github.com/repos/langchain-ai/langchain/issues/14057/comments | 5 | 2023-11-30T03:53:16Z | 2023-12-10T16:35:42Z | https://github.com/langchain-ai/langchain/issues/14057 | 2,017,823,760 | 14,057 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Do you have a new url?
https://python.langchain.com/docs/modules/chains/popular/
thank you
### Suggestion:
_No response_ | Issue: I can't visit the page of popular chains | https://api.github.com/repos/langchain-ai/langchain/issues/14054/comments | 1 | 2023-11-30T02:16:12Z | 2024-03-17T16:06:51Z | https://github.com/langchain-ai/langchain/issues/14054 | 2,017,743,973 | 14,054 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.11 running locally in PyCharm.
### Who can help?
@hwchase17 @agola11
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
def research_tool(input, thread_id=None):
    # Initialize Tavily Search
    search = TavilySearchAPIWrapper()
    tavily_tool = TavilySearchResults(api_wrapper=search)
    salesforce_history_tool = create_pinecone_tool()

    # Initialize PlayWright Web Browser
    sync_browser = create_sync_playwright_browser()
    toolkit = PlayWrightBrowserToolkit.from_browser(sync_browser=sync_browser)

    # Initialize the Toolkit
    tools = toolkit.get_tools()
    tools.append(tavily_tool)  # Add Tavily Search to the Toolkit
    tools.append(salesforce_history_tool)  # Add Salesforce History to the Toolkit
    tools.extend(sf_tools)  # Add Salesforce Tools to the Toolkit

    agent = OpenAIAssistantRunnable.create_assistant(
        name="Research Assistant",
        instructions="You are a personal research assistant on company information",
        tools=tools,
        model="gpt-4-1106-preview",
        as_agent=True,
    )

    agent_executor = AgentExecutor(agent=agent, tools=tools)

    if thread_id:
        result = agent_executor.invoke({"content": input, "thread_id": thread_id})
    else:
        result = agent_executor.invoke({"content": input})

    output = result['output']
    thread_id = result['thread_id']
    return output, thread_id
```
### Expected behavior
I am looking to have my agent run using the Assistants API. Instead, I receive the following error:
```
[chain/start] [1:chain:AgentExecutor > 4:chain:OpenAIAssistantRunnable] Entering Chain run with input:
[inputs]
[chain/error] [1:chain:AgentExecutor > 4:chain:OpenAIAssistantRunnable] [315ms] Chain run errored with error:
"BadRequestError(\"Error code: 400 - {'error': {'message': '1 validation error for Request\\\\nbody -> tool_outputs -> 0 -> output\\\\n str type expected (type=type_error.str)', 'type': 'invalid_request_error', 'param': None, 'code': None}}\")"
``` | OpenAIAssistantRunnable input validation error | https://api.github.com/repos/langchain-ai/langchain/issues/14050/comments | 2 | 2023-11-30T01:00:23Z | 2024-04-05T16:06:49Z | https://github.com/langchain-ai/langchain/issues/14050 | 2,017,681,132 | 14,050 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10, mac OS
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
With this script
```
from langchain.chains import RefineDocumentsChain, LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
document_prompt = PromptTemplate.from_template("{page_content}")
document_variable_name = "context"
llm = OpenAI()
# The prompt here should take as an input variable the
# `document_variable_name`
prompt = PromptTemplate.from_template(
"Summarize this content: {context}"
)
initial_llm_chain = LLMChain(llm=llm, prompt=prompt)
initial_response_name = "prev_response"
# The prompt here should take as an input variable the
# `document_variable_name` as well as `initial_response_name`
prompt_refine = PromptTemplate.from_template(
"Here's your first summary: {prev_response}. "
"Now add to it based on the following context: {context}"
)
refine_llm_chain = LLMChain(llm=llm, prompt=prompt_refine)
chain = RefineDocumentsChain(
initial_llm_chain=initial_llm_chain,
refine_llm_chain=refine_llm_chain,
document_prompt=document_prompt,
document_variable_name=document_variable_name,
initial_response_name=initial_response_name,
)
from langchain.schema import Document
text = """Nuclear power in space is the use of nuclear power in outer space, typically either small fission systems or radioactive decay for electricity or heat. Another use is for scientific observation, as in a Mössbauer spectrometer. The most common type is a radioisotope thermoelectric generator, which has been used on many space probes and on crewed lunar missions. Small fission reactors for Earth observation satellites, such as the TOPAZ nuclear reactor, have also been flown.[1] A radioisotope heater unit is powered by radioactive decay and can keep components from becoming too cold to function, potentially over a span of decades.[2]
The United States tested the SNAP-10A nuclear reactor in space for 43 days in 1965,[3] with the next test of a nuclear reactor power system intended for space use occurring on 13 September 2012 with the Demonstration Using Flattop Fission (DUFF) test of the Kilopower reactor.[4]
After a ground-based test of the experimental 1965 Romashka reactor, which used uranium and direct thermoelectric conversion to electricity,[5] the USSR sent about 40 nuclear-electric satellites into space, mostly powered by the BES-5 reactor. The more powerful TOPAZ-II reactor produced 10 kilowatts of electricity.[3]
Examples of concepts that use nuclear power for space propulsion systems include the nuclear electric rocket (nuclear powered ion thruster(s)), the radioisotope rocket, and radioisotope electric propulsion (REP).[6] One of the more explored concepts is the nuclear thermal rocket, which was ground tested in the NERVA program. Nuclear pulse propulsion was the subject of Project Orion.[7]
"""
docs = [
Document(
page_content=split,
metadata={"source": "https://en.wikipedia.org/wiki/Nuclear_power_in_space"},
)
for split in text.split("\n\n")
]
```
When I attach a callback that prints the [inputs](https://github.com/langchain-ai/langchain/blob/00a6e8962cc778cd8f6268cefc304465598c02cf/libs/core/langchain_core/callbacks/base.py#L197) in `on_chain_start`, even if every sub-chain only uses one Document, the `inputs` is always the full Document list.
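For completeness, the callback I'm attaching is essentially this minimal sketch:
```python
from langchain.callbacks.base import BaseCallbackHandler

class PrintInputsHandler(BaseCallbackHandler):
    def on_chain_start(self, serialized, inputs, **kwargs):
        # `inputs` is the full {"input_documents": [...]} dict for every sub-chain,
        # not just the single document that sub-chain actually consumes.
        print(inputs)

chain.run(input_documents=docs, callbacks=[PrintInputsHandler()])
```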
### Expected behavior
I expect the `inputs` in `on_chain_start` only includes the inputs that are used by this chain, not always the full list, otherwise it's totally meaningless to duplicate the same inputs many times. | Callback.on_chain_start has wrong "inputs" in RefineDocumentsChain | https://api.github.com/repos/langchain-ai/langchain/issues/14048/comments | 1 | 2023-11-29T23:49:27Z | 2023-11-30T19:01:31Z | https://github.com/langchain-ai/langchain/issues/14048 | 2,017,623,699 | 14,048 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
The current [section on Amazon OpenSearch Serverless][1] (AOSS) vector store uses the `AWS4Auth` class to authenticate to AOSS, yet the official [OpenSearch documentation][2] suggests using the `AWS4SignerAuth` class instead.
Further, the notebook lacks information on where to import the `AWS4Auth` class from and how to configure it with different AWS credentials (static access key/secret key, temporary credentials, etc.). It also lacks references on how to configure access policies (IAM, AOSS data access policies, etc.)
[1]: https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/vectorstores/opensearch.ipynb
[2]: https://opensearch.org/docs/latest/clients/python-low-level/#connecting-to-amazon-opensearch-serverless
### Idea or request for content:
Add installation instructions for the [requests_aws4auth][1] package and links to its [Github repo][2] in order to showcase configuration with different AWS credentials. Additionally reference [AWS documentation for AOSS][3] in order to get started [setting up permissions][4]
[1]: https://pypi.org/project/requests-aws4auth/
[2]: https://github.com/tedder/requests-aws4auth#basic-usage
[3]: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-getting-started.html
[4]: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/serverless-data-access.html | DOC: Expand documentation on how to authenticate and connect to Amazon OpenSearch Serverless | https://api.github.com/repos/langchain-ai/langchain/issues/14042/comments | 2 | 2023-11-29T20:15:29Z | 2024-05-01T16:05:33Z | https://github.com/langchain-ai/langchain/issues/14042 | 2,017,348,030 | 14,042 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I only see `metadata` as a parameter for a few callback handler methods, but I would like to have access to metadata in other methods such as `on_chain_error()` and `on_llm_error()`.
Currently, I can only see the `tags` in these methods.
I have error handling in my callback handler using `on_chain_error()` and I'd like to add information in my metadata dictionary to my exceptions (such as LLM name, model, things related to my chain, etc...).
I can put a list of tags, but I'd much prefer to use a dictionary to get certain keys and have my exceptions instantiated properly.
### Suggestion:
Please make both tags and metadata available for all callback handler methods. | Missing metadata in some callback handler methods | https://api.github.com/repos/langchain-ai/langchain/issues/14041/comments | 2 | 2023-11-29T20:08:28Z | 2024-03-13T21:58:22Z | https://github.com/langchain-ai/langchain/issues/14041 | 2,017,338,609 | 14,041 |
[
"hwchase17",
"langchain"
] | ### Feature request
I notice that other functions, such as add_texts and add_embeddings, allow you to pass a list of unique IDs that get paired with your embeddings. There is no such parameter for the add_documents function. This means that when you delete a document and add an updated version of it using add_documents, its unique ID won't be added to the vector store.
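The shape I'd like to be able to write is roughly the following (illustrative only: `ids` is the parameter being requested, and `vector_store` stands in for an existing FAISS store):
```python
from langchain.schema import Document

docs = [Document(page_content="updated text", metadata={"source": "a.txt"})]

# requested: let add_documents accept the same ids parameter that add_texts takes,
# so a re-added document keeps its stable identifier
vector_store.add_documents(docs, ids=["doc-a"])
```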
### Motivation
Inability to add unique ID to document after calling the add_documents function from FAISS
### Your contribution
Not sure as far as I know. If I can, let me know what you need from me. | Add Optional Parameter for Unique IDs in FAISS.add_documents Function | https://api.github.com/repos/langchain-ai/langchain/issues/14038/comments | 1 | 2023-11-29T18:48:20Z | 2024-03-13T21:58:17Z | https://github.com/langchain-ai/langchain/issues/14038 | 2,017,209,201 | 14,038 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
It would be great to add support for the latest Azure GPT models in get_openai_callback() --> https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/callbacks/openai_info.py
https://github.com/langchain-ai/langchain/issues/12994
### Suggestion:
Please add the Azure GPT models (latest) https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/callbacks/openai_info.py | Issue: support for Azure Open AI latest GPT models like, GPT 4 turpo in the get_openai_callback() | https://api.github.com/repos/langchain-ai/langchain/issues/14036/comments | 1 | 2023-11-29T18:08:30Z | 2024-03-13T21:58:12Z | https://github.com/langchain-ai/langchain/issues/14036 | 2,017,151,134 | 14,036 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Is it possible to convert a Conversation Chain into an Agent?
### Suggestion:
_No response_ | ConversationChain to Agent | https://api.github.com/repos/langchain-ai/langchain/issues/14034/comments | 2 | 2023-11-29T17:07:22Z | 2024-05-22T16:07:43Z | https://github.com/langchain-ai/langchain/issues/14034 | 2,017,049,508 | 14,034 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I want to integrate ConversationalRetrievalChain with history into a Gradio app.
For now I have the following approach:
```
# Memory buffer
memory = ConversationBufferWindowMemory(k=2, memory_key="chat_history", return_messages=True)

# LLM chain
chain = ConversationalRetrievalChain.from_llm(llm=llm, chain_type='stuff',
                                              retriever=vector_store.as_retriever(
                                                  search_kwargs={"k": 2}),
                                              memory=memory)

with gr.Blocks() as demo:
    gr.Markdown("# SageMaker Docs Chat 🤗")
    gr.Markdown("### Ask me question about Amazon SageMaker!")
    chatbot = gr.Chatbot(label="Chat history")
    message = gr.Textbox(label="Ask me a question!")
    clear = gr.Button("Clear")

    def user(user_message, chat_history):
        return gr.update(value="", interactive=False), chat_history + [[user_message, None]]

    def bot(chat_history):
        user_message = chat_history[-1][0]
        llm_response = qa({"question": user_message})
        bot_message = llm_response["answer"]
        chat_history[-1][1] = ""
        for character in bot_message:
            chat_history[-1][1] += character
            time.sleep(0.005)
            yield chat_history

    response = message.submit(user, [message, chatbot], [message, chatbot], queue=False).then(
        bot, chatbot, chatbot
    )
    response.then(lambda: gr.update(interactive=True), None, [message], queue=False)

demo.queue()
demo.launch()
```
which works fine for the simple question answer without history. I tried to implement something similar to this guide (https://github.com/facebookresearch/llama-recipes/blob/main/demo_apps/Llama2_Gradio.ipynb), but failed to do so. Do you have any solution for this use-case or any specific guide?
### Suggestion:
_No response_ | Optimal Integration of the ConversationalRetrievalChain (Open source llama-2) into gradio. | https://api.github.com/repos/langchain-ai/langchain/issues/14033/comments | 1 | 2023-11-29T16:50:09Z | 2024-03-13T20:00:16Z | https://github.com/langchain-ai/langchain/issues/14033 | 2,017,018,704 | 14,033 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.330, python 3.9
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I created an agent as follows:
```
def batch_embedding(node_label: str, text_node_properties: list):
    vector_index = Neo4jVector.from_existing_graph(
        OpenAIEmbeddings(),
        url=URL_DB_GRAPH,
        username=USERNAME,
        password=PASSWORD,
        index_name=node_label,
        node_label=node_label,
        text_node_properties=text_node_properties,
        embedding_node_property='embedding',
    )
    return vector_index
model_name = "gpt-4"
llm = ChatOpenAI(temperature=0, model_name=model_name)
vector_index = batch_embedding("Alarms", ["solution", "description", "type"])
vector_qa = RetrievalQA.from_chain_type(
    llm=llm, chain_type="stuff", retriever=vector_index.as_retriever())
cypher_chain = GraphCypherQAChain.from_llm(
cypher_llm = llm,
qa_llm = ChatOpenAI(temperature=0), graph=graph, verbose=True, cypher_prompt=CYPHER_GENERATION_PROMPT
)
```
```python
tools = [
    Tool(
        name="Alarms",
        func=vector_qa.run,
        description=prompt_per_alarms,
    ),
    Tool(
        name="Graph",
        func=cypher_chain.run,
        description=prompt_per_Graph,
    ),
]

mrkl = initialize_agent(
    tools, ChatOpenAI(temperature=0, model_name=model_name), agent=AgentType.OPENAI_FUNCTIONS, verbose=True, memory=memory
)

message = request.form['message']
response = mrkl.run(message)
```
When I receive an answer from cypher_chain.run I can see that I have a full context with an output, but the finished chain says "I'm sorry, but I don't have the information..." (see the image attached).

I noticed that this issue comes back when I have a full context with an array of data.
### Expected behavior
Finished chain contains the full context and write the answer. | Finished chain without an answer but full context have results | https://api.github.com/repos/langchain-ai/langchain/issues/14031/comments | 3 | 2023-11-29T16:24:00Z | 2024-07-31T17:55:02Z | https://github.com/langchain-ai/langchain/issues/14031 | 2,016,963,828 | 14,031 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I am currently piecing together some of the tutorials on the langchain documentation page to use a CustomOutputParser and a CustomPrompt Template similar to the ZeroShotReact Template.
While parsing the actions, I have a scenario where the model returns Action as None and Action Input as None. In that case, I would like access to the dynamically created prompt within the CustomOutputParser, so I can call another LLM and return the action as a call to that LLM to complete the action.
**Current Approach:**
```python
# Set up a prompt template
class CustomPromptTemplatePirate(StringPromptTemplate):
    # The template to use
    template: str
    # The list of tools available
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        # Get the intermediate steps (AgentAction, Observation tuples)
        # Format them in a particular way
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        # Set the agent_scratchpad variable to that value
        kwargs["agent_scratchpad"] = thoughts
        # Create a tools variable from the list of tools provided
        kwargs["tools"] = "\n".join([f"{tool.name}: {tool.description}" for tool in self.tools])
        # Create a list of tool names for the tools provided
        kwargs["tool_names"] = ", ".join([tool.name for tool in self.tools])
        return self.template.format(**kwargs)

class CustomOutputParserPirate(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        # Check if agent should finish
        if "Final Answer:" in llm_output:
            return AgentFinish(
                # Return values is generally always a dictionary with a single `output` key
                # It is not recommended to try anything else at the moment :)
                return_values={"output": llm_output.split("Final Answer:")[-1].strip()},
                log=llm_output,
            )
        # Parse out the action and action input
        regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
        match = re.search(regex, llm_output, re.DOTALL)
        if not match:
            raise OutputParserException(f"Could not parse LLM output: `{llm_output}`")
        action = match.group(1).strip()
        action_input = match.group(2)
        # Return the action and action input
        return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
# Set up the base template
template = """Complete the objective as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
These were previous tasks you completed:
Begin!
Question: {input}
{agent_scratchpad}"""
search = SerpAPIWrapper()
tools= [Tool(
name="Search",
func=search.run,
description="useful for when you need to answer questions about current events",
)]
prompt = CustomPromptTemplatePirate(
template=template,
tools=tools,
# This omits the `agent_scratchpad`, `tools`, and `tool_names` variables because those are generated dynamically
# This includes the `intermediate_steps` variable because that is needed
input_variables=["input", "intermediate_steps"]
)
output_parser = CustomOutputParserPirate()
# LLM chain consisting of the LLM and a prompt
llm_chain = LLMChain(llm=llm, prompt=prompt)
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names
)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
agent_executor.run("How many people live in canada as of 2023?")
```
**Challenges Faced:**
I am unable to figure out how to access the callbacks, or any other similar mechanism, to use the complete prompt / user input within the context of the CustomOutputParser.
**Desired Outcome:**
```python
def parse(self, llm_output: str, prompt: str) -> Union[AgentAction, AgentFinish]:
    # Check if agent should finish
    ...
    if "Action:" not in llm_output and "Action Input:" not in llm_output:
        # Call AgentAction with another fine-tuned LLM (defined as a tool), passing the user
        # question as input, e.g. "Write a summary on Canada".
        # I need a way to access the user question here.
```
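One direction I'm considering (a rough sketch only; `PromptHolder` and `FallbackLLM` are placeholder names I made up, not LangChain APIs): since the prompt template already sees the raw `input` inside `format()`, it could stash it somewhere the output parser can read.

```python
class PromptHolder:
    """Tiny shared holder the template writes to and the parser reads from."""
    last_prompt: str = ""
    last_input: str = ""

holder = PromptHolder()

class CustomPromptTemplatePirate(StringPromptTemplate):
    template: str
    tools: List[Tool]

    def format(self, **kwargs) -> str:
        intermediate_steps = kwargs.pop("intermediate_steps")
        thoughts = ""
        for action, observation in intermediate_steps:
            thoughts += action.log
            thoughts += f"\nObservation: {observation}\nThought: "
        kwargs["agent_scratchpad"] = thoughts
        kwargs["tools"] = "\n".join(f"{t.name}: {t.description}" for t in self.tools)
        kwargs["tool_names"] = ", ".join(t.name for t in self.tools)
        holder.last_input = kwargs["input"]          # remember the raw user question
        formatted = self.template.format(**kwargs)
        holder.last_prompt = formatted               # remember the full rendered prompt
        return formatted

class CustomOutputParserPirate(AgentOutputParser):
    def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
        ...
        if not match:
            # Instead of raising, route the remembered question to a fallback LLM tool.
            return AgentAction(tool="FallbackLLM", tool_input=holder.last_input, log=llm_output)
```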
I would greatly appreciate any advice, documentation, or examples that could assist me in accomplishing this task. Thank you very much for your time and support.
### Suggestion:
_No response_ | Issue: Request: Need Help with CustomAgentExecutor for Accessing Dynamically Created Prompts | https://api.github.com/repos/langchain-ai/langchain/issues/14027/comments | 3 | 2023-11-29T15:19:25Z | 2024-03-17T16:06:46Z | https://github.com/langchain-ai/langchain/issues/14027 | 2,016,824,570 | 14,027 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10
Langchain 0.0.311
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.text_splitter import HTMLHeaderTextSplitter, RecursiveCharacterTextSplitter
url = "https://plato.stanford.edu/entries/goedel/"
headers_to_split_on = [
("h1", "Header 1"),
("h2", "Header 2"),
("h3", "Header 3"),
("h4", "Header 4"),
]
html_splitter = HTMLHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
# for local file use html_splitter.split_text_from_file(<path_to_file>)
html_header_splits = html_splitter.split_text_from_url(url)
chunk_size = 500
chunk_overlap = 30
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
# Split
splits = text_splitter.split_documents(html_header_splits)
splits[80:85]
```
[Reference](https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/HTML_header_metadata#2-pipelined-to-another-splitter-with-html-loaded-from-a-web-url)
The bug seems to be in [etree](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/text_splitter.py#L586). A simple fix is perhaps like below:
```python
from lxml import etree
from pathlib import Path
path = Path(".../langchain/document_transformers/xsl/html_chunks_with_headers.xslt")
# etree.parse(path) Throws
etree.parse(str(path))
```
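Presumably the corresponding one-line change inside the splitter would just cast the bundled XSLT path to `str` before handing it to lxml (sketch only; the exact variable names in `text_splitter.py` may differ):

```python
from pathlib import Path
from lxml import etree

xslt_path = Path(__file__).parent / "document_transformers" / "xsl" / "html_chunks_with_headers.xslt"
xslt_tree = etree.parse(str(xslt_path))  # str(...) keeps lxml happy with a PosixPath
```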
### Expected behavior
The code in the reproducer should work. | HTMLHeaderTextSplitter throws TypeError: cannot parse from 'PosixPath' | https://api.github.com/repos/langchain-ai/langchain/issues/14024/comments | 1 | 2023-11-29T13:17:28Z | 2024-03-13T19:57:17Z | https://github.com/langchain-ai/langchain/issues/14024 | 2,016,575,594 | 14,024 |
[
"hwchase17",
"langchain"
] | > langchain-experimental : 0.0.42
> langchain : 0.0.340
> gpt4all : 2.0.2
> PostgreSQL : 15.5
I am trying to query my postgres db using the following code:
```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_experimental.sql import SQLDatabaseChain
from langchain.memory import ConversationBufferMemory
from langchain import SQLDatabase
from langchain.llms import GPT4All
from langchain.prompts import PromptTemplate
from langchain.globals import set_verbose
import os
username = "postgres"
password = "password"
host = "127.0.0.1" # internal IP
port = "5432"
mydatabase = "reporting_db"
pg_uri = f'postgresql://{username}:{password}@{host}:{port}/{mydatabase}'
my_db = SQLDatabase.from_uri(pg_uri)
_DEFAULT_TEMPLATE = '''Given an input question, first create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer.
Use the following format:
Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"
Only use the following tables:
{table_info}
If someone asks for the book written, they really mean the work table.
Question: {input}'''
PROMPT = PromptTemplate(
input_variables=["input", "table_info", "dialect"],
template=_DEFAULT_TEMPLATE
)
path = "/var/lib/postgresql/data/llama-2-7b.Q2_K.gguf"
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(model = path,
callbacks=callbacks,
n_threads=8,
max_tokens=81920,
verbose=True
)
set_verbose(True)
db_chain = SQLDatabaseChain.from_llm(llm = llm,
db = my_db,
prompt = PROMPT,
use_query_checker=True,
verbose = True
)
question = 'Count the rows on table Access'
answer = db_chain(question)
print(answer)
```
but I am getting the following error:
```
ERROR: sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "```"
LINE 1: ```sql
^
[SQL: ```sql
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (
SELECT * FROM (]
```
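For context, the immediate SQL failure is just Postgres receiving the literal markdown fence that the model wrapped around its query; stripping fences from the generated SQL before execution avoids that particular error (rough sketch below, which does nothing about the runaway nested SELECTs themselves):

```python
import re

def strip_sql_fences(generated: str) -> str:
    """Remove ```sql fences that the model sometimes wraps around its query."""
    return re.sub(r"```(?:sql)?", "", generated).strip()
```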
### Suggestion:
_No response_ | Recursive query when connecting to postgres db | https://api.github.com/repos/langchain-ai/langchain/issues/14022/comments | 1 | 2023-11-29T10:55:01Z | 2024-03-13T19:55:54Z | https://github.com/langchain-ai/langchain/issues/14022 | 2,016,322,236 | 14,022 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I'm looking for an implementation that uses a ConversationChain and gives it access to the Internet. I want to integrate DuckDuckGo, the Bing API, or SerpAPI into my ConversationChain.
Would appreciate any help! Thanks
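For reference, the closest thing I've found so far is a conversational agent with a search tool rather than a plain ConversationChain; a minimal sketch (DuckDuckGo here, but a Bing/SerpAPI tool could be swapped in):

```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.tools import DuckDuckGoSearchRun

llm = ChatOpenAI(temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
tools = [DuckDuckGoSearchRun()]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    verbose=True,
)
agent.run("What happened in the news today?")
```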
### Suggestion:
_No response_ | Unable to Integrate ConversationChain with Tools like duckduckgo or serp-api | https://api.github.com/repos/langchain-ai/langchain/issues/14021/comments | 3 | 2023-11-29T10:47:49Z | 2024-03-13T19:57:49Z | https://github.com/langchain-ai/langchain/issues/14021 | 2,016,308,490 | 14,021 |
[
"hwchase17",
"langchain"
] | ### System Info
IPython : 8.15.0
ipykernel : 6.25.0
ipywidgets : 8.0.4
jupyter_client : 7.4.9
jupyter_core : 5.5.0
jupyter_server : 1.23.4
jupyterlab : 3.5.3
nbclient : 0.8.0
nbconvert : 7.10.0
nbformat : 5.9.2
notebook : 6.5.4
qtconsole : 5.4.2
traitlets : 5.7.1
Python 3.11.6
Langchain '0.0.340'
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
Running the line below, taken from the official documentation notebook (linked underneath), raises `TypeError: object Document can't be used in 'await' expression`:

translated_document = await qa_translator.atransform_documents(documents)

Notebook:
https://github.com/langchain-ai/langchain/blob/1cd9d5f3328e144cbe5d6ef52a22029d4fdf0cce/docs/docs/integrations/document_transformers/doctran_translate_document.ipynb
### Expected behavior
The problem might arise due to a specific Python version or asyncio. | Doctran translate documents | https://api.github.com/repos/langchain-ai/langchain/issues/14020/comments | 2 | 2023-11-29T10:46:04Z | 2024-03-16T16:07:25Z | https://github.com/langchain-ai/langchain/issues/14020 | 2,016,305,395 | 14,020
[
"hwchase17",
"langchain"
] | ### Feature request
It would be interesting to have the option to run an OpenAIAssistantRunnable that has access to both custom tools and OpenAI built-in tools like code_interpreter, vision and retrieval.
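Hypothetical usage this request is asking for (illustrative only; mixing the two kinds of tools is exactly what is not guaranteed to work today, and `my_custom_tool` is a placeholder):

```python
from langchain.agents.openai_assistant import OpenAIAssistantRunnable

assistant = OpenAIAssistantRunnable.create_assistant(
    name="hybrid assistant",
    instructions="Use the code interpreter or my custom tool, whichever fits.",
    tools=[{"type": "code_interpreter"}, my_custom_tool],  # built-in + custom side by side
    model="gpt-4-1106-preview",
    as_agent=True,
)
```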
### Motivation
It would increase developers' capability to create even more powerful agents.
### Your contribution
n/a | Ability to use custom tools and openai build in functions on OpenAIAssistantRunnable | https://api.github.com/repos/langchain-ai/langchain/issues/14019/comments | 1 | 2023-11-29T10:41:57Z | 2024-03-13T20:03:43Z | https://github.com/langchain-ai/langchain/issues/14019 | 2,016,297,886 | 14,019 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I use the code below to load data, split it, embed it, and finally push it into a vector store. During that process,
I'm getting **openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}**. After this method failed, I also tried the AzureCosmosDBVectorSearch vector store; it also failed and returned the same error. Kindly help with this.
```
from langchain.embeddings.azure_openai import AzureOpenAIEmbeddings
from langchain.vectorstores.azure_cosmos_db import AzureCosmosDBVectorSearch
from langchain.vectorstores.chroma import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import PyPDFLoader
# Load PDF
loaders = [
PyPDFLoader("ai.pdf")
]
docs = []
for loader in loaders:
docs.extend(loader.load())
# Define the Text Splitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1500,
chunk_overlap=150
)
# Create a split of the document using the text splitter
res_splits = text_splitter.split_documents(docs)
embedding = AzureOpenAIEmbeddings(
openai_api_version="1699-02-30",
openai_api_key="xxxxxxxxxxxxxxxxxxxxxxxxx",
# model_name="gpt-35-turbo",
azure_endpoint="https://ggggggggggggggggggggggg.openai.azure.com/")
persist_directory = 'docs/chroma/'
# Create the vector store
vectordb = Chroma.from_documents(
documents=res_splits,
embedding=embedding,
persist_directory=persist_directory
)
print(vectordb._collection.count())
```
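A guess at what might be going on (not a confirmed diagnosis): with Azure OpenAI the embeddings client usually also needs the name of your embedding deployment and a real `openai_api_version`; a 404 "Resource not found" is what Azure returns when the deployment path is wrong. Something along these lines, with placeholder values:

```python
embedding = AzureOpenAIEmbeddings(
    azure_deployment="text-embedding-ada-002",   # the name of YOUR Azure deployment
    openai_api_version="2023-05-15",
    openai_api_key="xxxxxxxxxxxxxxxxxxxxxxxxx",
    azure_endpoint="https://<resource-name>.openai.azure.com/",
)
```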
### Suggestion:
_No response_ | Issue: openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}} | https://api.github.com/repos/langchain-ai/langchain/issues/14018/comments | 8 | 2023-11-29T10:18:06Z | 2024-06-08T16:07:41Z | https://github.com/langchain-ai/langchain/issues/14018 | 2,016,254,745 | 14,018 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.10.10
langchain-core @ 0.0.7
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import RetrievalQA
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.text_splitter import MarkdownHeaderTextSplitter, RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma

# `docs` is assumed to have been loaded earlier (e.g. with a document loader)
md_file = docs[0].page_content
headers_to_split_on = [
    ("###", "Section"),
]
markdown_splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
md_header_splits = markdown_splitter.split_text(md_file)
chunk_size = 500
chunk_overlap = 0
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
all_splits = text_splitter.split_documents(md_header_splits)
vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())
metadata_field_info = [
    AttributeInfo(
        name="Section",
        description="Part of the document that the text comes from",
        type="string or list[string]",
    ),
]
document_content_description = "Major sections of the document"
# Define self query retriever
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa_chain = RetrievalQA.from_chain_type(llm, retriever=retriever)
qa_chain.run("衰老有哪些因素?")  # "What factors cause aging?"
```
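The 404 is presumably because the bare `OpenAI(temperature=0)` wrapper still defaults to the retired `text-davinci-003` completion model; one workaround (untested here) is to give the query-constructor step a model that still exists:

```python
llm = OpenAI(model_name="gpt-3.5-turbo-instruct", temperature=0)
# or build the retriever with a chat model instead:
# llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```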
### Expected behavior
how can I fix the problem | when I run the demo from the cookbook,I get error Error code: 404 - {'error': {'message': 'The model `text-davinci-003` does not exist1', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}} | https://api.github.com/repos/langchain-ai/langchain/issues/14017/comments | 1 | 2023-11-29T08:45:42Z | 2024-03-13T20:00:32Z | https://github.com/langchain-ai/langchain/issues/14017 | 2,016,092,448 | 14,017 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
When I call the Llama-2-70b-chat model using a ConversationChain, I get a very short response from the model. But if I use an LLMChain to call the model, it gives me a long response. So, is there anything that cuts off or limits the response length in ConversationChain? I set the model parameter max_new_tokens to 4096, so I don't think the model is the cause.
<img width="1589" alt="截屏2023-11-29 15 47 33" src="https://github.com/langchain-ai/langchain/assets/27841780/34eb62c5-ce1f-4296-9ff8-6298c0c031d6">
### Suggestion:
_No response_ | Issue: how to increase the conversation chain response length? | https://api.github.com/repos/langchain-ai/langchain/issues/14015/comments | 1 | 2023-11-29T07:49:03Z | 2024-03-13T20:04:32Z | https://github.com/langchain-ai/langchain/issues/14015 | 2,016,008,120 | 14,015 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I am using Langchain and LlamaCpp to load my models.
I have set "`mirostat`" and "`repetition_penalty`" in my model params, and recently I have been getting the following UserWarning:
```
UserWarning: WARNING! repetition_penalty is not default parameter.
repetition_penalty was transferred to model_kwargs.
Please confirm that repetition_penalty is what you intended.
```
and
```
UserWarning: WARNING! mirostat is not default parameter.
mirostat was transferred to model_kwargs.
Please confirm that mirostat is what you intended.
```
Here is my code:
```
import box
import yaml
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
def build_llm(model_path):
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
n_gpu_layers = 1 # Metal set to 1 is enough.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon Chip.
if model_path == "/Users/mweissenba001/Documents/llama2/llama.cpp/models/7B/ggml-model-q4_0.bin":
context_size = 4000
    elif model_path == "/Users/mweissenba001/Documents/rag_example/Modelle/llama-2-13b-german-assistant-v2.Q5_K_M.gguf":
context_size = 4000
else:
context_size = 7000
llm = LlamaCpp(
max_tokens =cfg.MAX_TOKENS,
model_path=model_path,
temperature=cfg.TEMPERATURE,
f16_kv=True,
n_ctx=context_size, # 8k aber mann muss Platz lassen für Instruction, History etc.
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
callback_manager=callback_manager,
verbose=True, # Verbose is required to pass to the callback manager
top_p=0.75,
top_k=40,
repetition_penalty=1.1,
mirostat = 2,
)
return llm
llm = build_llm(model_path)
```
My current Langchain Version is:
`langchain-0.0.339`
Upgrading to `langchain-0.0.341` didn't help.
So what do I have to do to prevent the warnings? Where in model_kwargs do I have to set mirostat and repetition_penalty?
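From what I can tell, the warning only means LangChain moved the unrecognised kwargs into `model_kwargs` for me; passing them there explicitly should silence it. A sketch of the relevant part of my setup (this just mirrors what LangChain already did automatically; whether llama-cpp-python actually honours these names, or wants `repeat_penalty` / `mirostat_mode` at call time instead, is something I still need to check):

```python
llm = LlamaCpp(
    model_path=model_path,
    temperature=cfg.TEMPERATURE,
    n_ctx=context_size,
    callback_manager=callback_manager,
    verbose=True,
    model_kwargs={"repetition_penalty": 1.1, "mirostat": 2},
)
```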
### Suggestion:
_No response_ | UserWarning: WARNING! repetition_penalty is not default parameter. | https://api.github.com/repos/langchain-ai/langchain/issues/14014/comments | 1 | 2023-11-29T07:43:04Z | 2024-03-13T20:03:38Z | https://github.com/langchain-ai/langchain/issues/14014 | 2,016,000,077 | 14,014 |
[
"hwchase17",
"langchain"
] | ### Feature request
[Rephrase and Respond: Let Large Language Models Ask Better Questions for Themselves](https://arxiv.org/abs/2311.04205)
",,,RaR is complementary to CoT and can be combined with CoT to achieve even better performance,,,"
- an interesting prompting technique that could be implemented with LangChain.
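For what it's worth, the paper's one-step variant (append "Rephrase and expand the question, and respond." to the user question) already looks expressible as a small LCEL chain; a sketch, with an example question in the paper's style:

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser

rar_prompt = ChatPromptTemplate.from_template(
    "{question}\nRephrase and expand the question, and respond."
)
chain = rar_prompt | ChatOpenAI(temperature=0) | StrOutputParser()
print(chain.invoke({"question": "Was Kurt Cobain born in an even month?"}))
```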
### Motivation
to make LangChain more powerful?
### Your contribution
I can help with documentation | `Rephrase and Respond` | https://api.github.com/repos/langchain-ai/langchain/issues/14003/comments | 1 | 2023-11-29T03:12:45Z | 2024-03-13T20:00:34Z | https://github.com/langchain-ai/langchain/issues/14003 | 2,015,715,132 | 14,003 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Streamlit tutorial suggests
```
from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories import StreamlitChatMessageHistory
# Optionally, specify your own session_state key for storing messages
msgs = StreamlitChatMessageHistory(key="special_app_key")
memory = ConversationBufferMemory(memory_key="history", chat_memory=msgs)
if len(msgs.messages) == 0:
msgs.add_ai_message("How can I help you?")
```
but in version 0.0.340 the definition of ConversationBufferMemory only lists the parameter memory_key, not chat_history.
I have noticed that my model outputs are less accurate for the same script when Streamlit is incorporated, and I suspect this is the issue.
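If it helps whoever triages this: `chat_memory` appears to be declared on the `BaseChatMemory` parent class rather than on `ConversationBufferMemory` itself, which may be why it does not show up among the subclass's listed parameters; the tutorial call should still be accepted (quick check below):

```python
from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories import StreamlitChatMessageHistory

msgs = StreamlitChatMessageHistory(key="special_app_key")
# chat_memory is inherited from BaseChatMemory, so it is still a valid keyword argument
memory = ConversationBufferMemory(memory_key="history", chat_memory=msgs)
print(type(memory.chat_memory))
```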
### Idea or request for content:
Please clarify how to best implement Streamlit with chat history. | DOC: streamlit memory parameters - memory_key and chat_history | https://api.github.com/repos/langchain-ai/langchain/issues/13995/comments | 2 | 2023-11-29T00:14:13Z | 2024-04-23T16:55:13Z | https://github.com/langchain-ai/langchain/issues/13995 | 2,015,551,448 | 13,995 |
[
"hwchase17",
"langchain"
] | ### System Info
Code
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, return_sql=False, use_query_checker=True, prompt=prompt_template)
db_chain.run("What are some recently added dockets and their title?")
Verbose
> Entering new SQLDatabaseChain chain...
What are some recently added dockets and their title?
SQLQuery:SELECT id, title, modifyDate FROM docket ORDER BY modifyDate DESC LIMIT 5;
SQLResult: [('CMS-2023-0184', 'CY 2024 Inpatient Hospital Deductible and Hospital and Extended Care Services Coinsurance Amounts. CMS-8083-N', datetime.datetime(2023, 11, 1, 15, 34, 1)), ('CMS-2023-0183', '(CMS-10143) State Data for the Medicare Modernization Act (MMA)', datetime.datetime(2023, 11, 1, 10, 34, 24)), ('CMS-2023-0181', 'CHIP State Plan Eligibility (CMS-10398 #17)', datetime.datetime(2023, 11, 1, 10, 25, 35)), ('CMS-2023-0182', '(CMS-10434 #77) Medicaid and Continuous Eligibility for Children', datetime.datetime(2023, 11, 1, 10, 24, 56)), ('CMS-2023-0180', 'Virtual Groups for Merit Based Incentive Payment System (MIPS) (CMS-10652)', datetime.datetime(2023, 10, 31, 13, 8, 24))]
Answer:SELECT id, title, modifyDate FROM docket ORDER BY modifyDate DESC LIMIT 5;
> Finished chain.
Output:
'SELECT id, title, modifyDate FROM docket ORDER BY modifyDate DESC LIMIT 5;'
The chain is returning the SQL query text as the final answer instead of a natural-language answer built from the SQLResult, even though the query itself executes correctly.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
prompt_suffix = """
If asked for recent dockets, give 5 most recent ones.
Make sure the table name is in the database.
Table name: docket,
Use only these columns when selecting: id, title, modifyDate
"""
prompt_template = PromptTemplate.from_template(prompt_suffix)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, return_sql=False, use_query_checker=True, prompt=prompt_template)
db_chain.run("What are some recently added dockets and their title?")
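Possibly relevant: SQLDatabaseChain's default prompt expects the {input}/{table_info} placeholders and the Question/SQLQuery/SQLResult/Answer scaffolding, and my custom prompt drops all of that, which may be what confuses the final "Answer:" step. A sketch of keeping my extra guidance while preserving the scaffold (adapted from the default template; not yet verified to fix the behaviour):

```python
from langchain.prompts import PromptTemplate

_template = """Given an input question, first create a syntactically correct {dialect} query to run,
then look at the results of the query and return the answer.
Use the following format:

Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here

If asked for recent dockets, give the 5 most recent ones.
Only use the table: docket
Use only these columns when selecting: id, title, modifyDate

Only use the following tables:
{table_info}

Question: {input}"""

prompt_template = PromptTemplate(
    input_variables=["input", "table_info", "dialect"], template=_template
)
```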
### Expected behavior
Example output:
The recently added dockets with their dates are:\n\n* CMS-2023-0181 - CY 2024 Inpatient Hospital Deductible and Hospital and Extended Care Services Coinsurance Amounts (November 1st, 2023' | SQLDatabaseChain returning Question and SQL Query instead of answer | https://api.github.com/repos/langchain-ai/langchain/issues/13994/comments | 4 | 2023-11-28T23:56:36Z | 2024-06-24T12:26:33Z | https://github.com/langchain-ai/langchain/issues/13994 | 2,015,530,321 | 13,994 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi there,
Previously I was using local pickle files to store my PDFs and chat history. Now I am moving to the next step, where I want to use Pinecone as my vector database to store these. I have made a couple of changes and they are not working; I am getting error messages, especially in the part where I check whether the embeddings are already stored in Pinecone and only create new embeddings for new files.
Here is my code:
```python
## Imports
import streamlit as st
import os
from apikey import apikey
import pickle
from PyPDF2 import PdfReader
# Streamlit - user interface
from streamlit_extras.add_vertical_space import add_vertical_space
# Langchain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.callbacks import get_openai_callback
from langchain.chat_models.openai import ChatOpenAI
from langchain.schema import (SystemMessage, HumanMessage, AIMessage)
# Pinecone
from langchain.vectorstores import Pinecone
import pinecone
os.environ['OPENAI_API_KEY'] = apikey
## User Interface
# Side Bar
with st.sidebar:
st.title('🚀 Zi-GPT Version 2.0')
st.markdown('''
## About
This app is an LLM-powered chatbot built using:
- [Streamlit](https://streamlit.io/)
- [LangChain](https://python.langchain.com/)
- [OpenAI](https://platform.openai.com/docs/models) LLM model
''')
add_vertical_space(5)
st.write('Made with ❤️ by Zi')
# Main Page
def main():
st.header("Zi's PDF Helper: Chat with PDF")
# upload a PDF file
pdf = st.file_uploader("Please upload your PDF here", type='pdf')
# st.write(pdf)
# read PDF
if pdf is not None:
pdf_reader = PdfReader(pdf)
# split document into chunks
# also can use text split: good for PDFs that do not contains charts and visuals
sections = []
for page in pdf_reader.pages:
# Split the page text by paragraphs (assuming two newlines indicate a new paragraph)
page_sections = page.extract_text().split('\n\n')
sections.extend(page_sections)
chunks = sections
# st.write(chunks)
## embeddings
# Set up Pinecone
pinecone.init(api_key='d8d78cba-fbf1-42c6-a761-9e89a5ed24eb', environment='gcp-starter')
index_name = 'langchainresearch'
if index_name not in pinecone.list_indexes():
pinecone.create_index(index_name, dimension=1536, metric="cosine") # Adjust the dimension as per your embeddings
index = pinecone.Index(index_name)
file_name = pdf.name[:-4]
# Check if embeddings are already stored in Pinecone
if index.exists(id=file_name):
# Fetch embeddings from Pinecone
VectorStore = index.fetch(ids=[file_name])[file_name]
st.write('Embeddings Loaded from Pinecone')
else:
# Compute embeddings
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
VectorStore = FAISS.from_texts(chunks, embedding=embeddings)
# Store embeddings in Pinecone
vectors = VectorStore.get_all_vectors()
index.upsert(vectors={(file_name, vectors)})
st.write('Embeddings Computation Completed and Stored in Pinecone')
# Create chat history
# Pinecone Setup for Chat History
chat_history_index_name = 'chat_history'
if chat_history_index_name not in pinecone.list_indexes():
pinecone.create_index(chat_history_index_name, dimension=1) # Dimension is 1 as we're not storing vectors here
chat_history_index = pinecone.Index(chat_history_index_name)
# Create or Load Chat History from Pinecone
if pdf:
# Check if chat history exists in Pinecone
if chat_history_index.exists(id=pdf.name):
# Fetch chat history from Pinecone
chat_history = chat_history_index.fetch(ids=[pdf.name])[pdf.name]
st.write('Chat History Loaded from Pinecone')
else:
# Initialize empty chat history
chat_history = []
# Initialize chat_history in session_state if not present
if 'chat_history' not in st.session_state:
st.session_state.chat_history = []
# Check if 'prompt' is in session state
if 'last_input' not in st.session_state:
st.session_state.last_input = ''
# User Input
current_prompt = st.session_state.get('user_input', '')
prompt_placeholder = st.empty()
prompt = prompt_placeholder.text_area("Ask questions about your PDF:", value=current_prompt, placeholder="Send a message", key="user_input")
submit_button = st.button("Submit")
if submit_button and prompt:
# Update the last input in session state
st.session_state.last_input = prompt
docs = VectorStore.similarity_search(query=prompt, k=3)
#llm = OpenAI(temperature=0.9, model_name='gpt-3.5-turbo')
chat = ChatOpenAI(model='gpt-4', temperature=0.7, max_tokens=3000)
message = [
SystemMessage(content="You are a helpful assistant"),
HumanMessage(content=prompt)
]
chain = load_qa_chain(llm=chat, chain_type="stuff")
with get_openai_callback() as cb:
response = chain.run(input_documents=docs, question=message)
print(cb)
# st.write(response)
# st.write(docs)
# Process the response using AIMessage schema
# ai_message = AIMessage(content="AI message content")
# ai_message.content = response.generations[0].message.content
# Add to chat history
chat_entry = {
"user_message": prompt,
"bot_response": response
}
# Save chat history
# Generate a unique ID for the chat entry, e.g., using a timestamp or a UUID
chat_entry_id = generate_unique_id()
pinecone_upsert(chat_history_index, {chat_entry_id: chat_entry})
# Clear the input after processing
prompt_placeholder.text_area("Ask questions about your PDF:", value='', placeholder="Send a message", key="pdf_prompt")
# Display the entire chat
chat_history = pinecone_query(chat_history_index, query_params)
chat_content = ""
for entry in chat_history:
user_msg = entry["user_message"]
bot_resp = entry["bot_response"]
chat_content += f"<div style='background-color: #222222; color: white; padding: 10px;'>**You:** {user_msg}</div>"
chat_content += f"<div style='background-color: #333333; color: white; padding: 10px;'>**Zi GPT:** {bot_resp}</div>"
st.markdown(chat_content, unsafe_allow_html=True)
if __name__ == '__main__':
    main()
```
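For whoever picks this up: I suspect the Pinecone bookkeeping I hand-rolled above can be replaced by the stock LangChain helpers. A rough sketch of what I believe the intended pattern is (`chunks`, `file_name` and `prompt` as in my script; untested):

```python
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="<PINECONE_API_KEY>", environment="gcp-starter")
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")

index_name = "langchainresearch"
if index_name not in pinecone.list_indexes():
    pinecone.create_index(index_name, dimension=1536, metric="cosine")

# First run for a file: embed the chunks and upsert them under a per-file namespace.
vectorstore = Pinecone.from_texts(chunks, embeddings, index_name=index_name, namespace=file_name)
# Subsequent runs: reuse what is already stored instead of re-embedding.
# vectorstore = Pinecone.from_existing_index(index_name, embeddings, namespace=file_name)

docs = vectorstore.similarity_search(prompt, k=3)
```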
### Suggestion:
_No response_ | Issue: Introduce Pinecone into my PDF reader LLM | https://api.github.com/repos/langchain-ai/langchain/issues/13987/comments | 2 | 2023-11-28T21:48:49Z | 2024-01-24T14:59:06Z | https://github.com/langchain-ai/langchain/issues/13987 | 2,015,378,058 | 13,987 |
[
"hwchase17",
"langchain"
] | I was working with a sqlite DB that I created for a large dataset (~150k rows).
Code snippets:
```python
db = SQLDatabase.from_uri("sqlite:///MLdata.sqlite")

SQLITE_PROMPT_TEXT = '''You are a SQLite expert. Given an input question, first create a
syntactically correct SQLite query to run, then look at the results of the query and return
the answer to the input question.
Unless the user specifies in the question a specific number of examples to obtain, query for
at most {top_k} results using the LIMIT clause as per SQLite. You can order the results to
return the most informative data in the database.
Never query for all columns from a table. You must query only the columns that are needed to
answer the question. Wrap each column name in double quotes (") to denote them as delimited
identifiers.
Pay attention to use only the column names you can see in the tables below. Be careful to not
query for columns that do not exist. Also, pay attention to which column is in which table.

Use the following format:

Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here

Only use the following tables:
{table_info}

Question: {input}'''

SQLITE_PROMPT = PromptTemplate(input_variables=['input', 'table_info', 'top_k'], template=SQLITE_PROMPT_TEXT)
sql_chain = SQLDatabaseChain(llm=local_llm, database=db, prompt=SQLITE_PROMPT, return_direct=False, return_intermediate_steps=False, verbose=False)
res = sql_chain("How many rows is in this db?")
```
Response: 'There are 142321 rows in the input_table of this db.'
Second query
`res=sql_chain("Count rows with 'Abdominal pain', VAX_TYPE='COVID19', SEX= 'F' and HOSPITAL= 'Y' is in the input_table of this db")`
Response: 'There are 115 rows in the input_table where Abdominal pain is present, VAX_TYPE is COVID19, Sex is Female, and Hospital is Yes.'
For the third query, I was trying to find the patient IDs instead of the count, but I am not able to get the patient IDs.
`res=sql_chain("What is the VAERS_ID with 'Abdominal pain', VAX_TYPE='COVID19', SEX= 'F' and HOSPITAL= 'Y' in this db. ")`
But the output generated is the same as for my second query.
It seems like counting works fine but nothing more.
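Something I plan to try next (a sketch, not yet verified with llama2): let the chain hand back the raw SQLResult instead of asking the model to phrase an answer, which should give table-like output.

```python
sql_chain = SQLDatabaseChain(
    llm=local_llm, database=db, prompt=SQLITE_PROMPT,
    return_direct=True,              # return the query result itself as the chain output
    return_intermediate_steps=True,  # also keep the generated SQL for inspection
    verbose=False,
)
out = sql_chain("List the VAERS_ID values with 'Abdominal pain', VAX_TYPE='COVID19', SEX='F' and HOSPITAL='Y'")
```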
Can anyone help me in displaying table like output from sqlDbchain via langchain and llama2? | Using langchain and LLaMA2 to QA with a large SQL DB | https://api.github.com/repos/langchain-ai/langchain/issues/13977/comments | 2 | 2023-11-28T18:11:46Z | 2024-03-13T19:55:37Z | https://github.com/langchain-ai/langchain/issues/13977 | 2,015,022,272 | 13,977 |
[
"hwchase17",
"langchain"
] | ### Feature request
NVIDIA TensorRT is an open-source SDK for high-performance deep learning inference; it includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications. I propose that a connector be added to LangChain allowing users to use TensorRT with minimal configuration.
### Motivation
There are several implementations of this connector floating around the internet, and lots of folks seem to want this. Rather than have everyone add the connector manually, it seems to make sense to include it natively in LangChain.
### Your contribution
I am happy to open the PR for this and will do so shortly. | Nvidia TensorRT LLM Connector | https://api.github.com/repos/langchain-ai/langchain/issues/13975/comments | 4 | 2023-11-28T16:24:31Z | 2024-03-27T16:07:47Z | https://github.com/langchain-ai/langchain/issues/13975 | 2,014,825,307 | 13,975 |
[
"hwchase17",
"langchain"
] | ### Feature request
Currently, the extraction chain only supports extracting an array of objects.
For example
```python
from typing import Optional
from langchain.chains import create_extraction_chain_pydantic
from langchain.pydantic_v1 import BaseModel
# Pydantic data class
class Properties(BaseModel):
person_name: str
person_height: int
person_hair_color: str
dog_breed: Optional[str]
dog_name: Optional[str]
# Extraction
chain = create_extraction_chain_pydantic(pydantic_schema=Properties, llm=llm)
# Run
inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde."""
chain.run(inp)
# Results in
#
# [Properties(person_name='Alex', person_height=5, person_hair_color='blonde', dog_breed=None, dog_name=None),
# Properties(person_name='Claudia', person_height=6, person_hair_color='brunette', dog_breed=None, dog_name=None)]
#
````
There is currently no option available to just get one `Properties` object.
It would be nice if you could specify up front whether you are interested in one object or an array of objects. For example:
```python
chain = create_extraction_chain_pydantic(pydantic_schema=List[Properties], llm=llm)  # -> a list of Properties
chain = create_extraction_chain_pydantic(pydantic_schema=Properties, llm=llm)        # -> just one Properties object
```
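The workaround I use today, for completeness (it obviously does not stop the model from returning several objects):

```python
result = chain.run(inp)
single = result[0] if result else None
```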
### Motivation
It would just make life easier when you knew that you were only dealing with one object.
It might also improve the response and prevent wrong or incomplete responses.
### Your contribution
I would create and submit a PR that contains this feature. | Singel object extraction | https://api.github.com/repos/langchain-ai/langchain/issues/13971/comments | 2 | 2023-11-28T14:29:08Z | 2024-03-08T16:40:14Z | https://github.com/langchain-ai/langchain/issues/13971 | 2,014,572,602 | 13,971 |
[
"hwchase17",
"langchain"
] | ### System Info
Hi!
I'm getting this error 👍
```
AttributeError                            Traceback (most recent call last)
File <command-3819272873890469>, line 7
1 # gpt-3.5-turbo-0613
2 # gpt-3.5-turbo-1106
3 # gpt-4
4 # gpt-4-1106-preview
5 llm_model = "gpt-3.5-turbo-1106"
----> 7 llm = ChatOpenAI(
8 temperature=0,
9 model=llm_model
10 )
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/load/serializable.py:97, in Serializable.__init__(self, **kwargs)
96 def __init__(self, **kwargs: Any) -> None:
---> 97 super().__init__(**kwargs)
98 self._lc_kwargs = kwargs
File /databricks/python/lib/python3.10/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()
File /databricks/python/lib/python3.10/site-packages/pydantic/main.py:1102, in pydantic.main.validate_model()
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/langchain/chat_models/openai.py:291, in ChatOpenAI.validate_environment(cls, values)
284 values["openai_proxy"] = get_from_dict_or_env(
285 values,
286 "openai_proxy",
287 "OPENAI_PROXY",
288 default="",
289 )
290 try:
--> 291 import openai
293 except ImportError:
294 raise ImportError(
295 "Could not import openai python package. "
296 "Please install it with `pip install openai`."
297 )
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/openai/__init__.py:11
9 from ._types import NoneType, Transport, ProxiesTypes
10 from ._utils import file_from_path
---> 11 from ._client import (
12 Client,
13 OpenAI,
14 Stream,
15 Timeout,
16 Transport,
17 AsyncClient,
18 AsyncOpenAI,
19 AsyncStream,
20 RequestOptions,
21 )
22 from ._version import __title__, __version__
23 from ._exceptions import (
24 APIError,
25 OpenAIError,
(...)
37 APIResponseValidationError,
38 )
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/openai/_client.py:12
8 from typing_extensions import override
10 import httpx
---> 12 from . import resources, _exceptions
13 from ._qs import Querystring
14 from ._types import (
15 NOT_GIVEN,
16 Omit,
(...)
21 RequestOptions,
22 )
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/openai/resources/__init__.py:6
4 from .chat import Chat, AsyncChat, ChatWithRawResponse, AsyncChatWithRawResponse
5 from .audio import Audio, AsyncAudio, AudioWithRawResponse, AsyncAudioWithRawResponse
----> 6 from .edits import Edits, AsyncEdits, EditsWithRawResponse, AsyncEditsWithRawResponse
7 from .files import Files, AsyncFiles, FilesWithRawResponse, AsyncFilesWithRawResponse
8 from .images import (
9 Images,
10 AsyncImages,
11 ImagesWithRawResponse,
12 AsyncImagesWithRawResponse,
13 )
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/openai/resources/edits.py:24
19 from .._client import OpenAI, AsyncOpenAI
21 __all__ = ["Edits", "AsyncEdits"]
---> 24 class Edits(SyncAPIResource):
25 with_raw_response: EditsWithRawResponse
27 def __init__(self, client: OpenAI) -> None:
File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.10/site-packages/openai/resources/edits.py:31, in Edits()
28 super().__init__(client)
29 self.with_raw_response = EditsWithRawResponse(self)
---> 31 @typing_extensions.deprecated(
32 "The Edits API is deprecated; please use Chat Completions instead.\n\nhttps://openai.com/blog/gpt-4-api-general-availability#deprecation-of-the-edits-api\n"
33 )
34 def create(
35 self,
36 *,
37 instruction: str,
38 model: Union[str, Literal["text-davinci-edit-001", "code-davinci-edit-001"]],
39 input: Optional[str] | NotGiven = NOT_GIVEN,
40 n: Optional[int] | NotGiven = NOT_GIVEN,
41 temperature: Optional[float] | NotGiven = NOT_GIVEN,
42 top_p: Optional[float] | NotGiven = NOT_GIVEN,
43 # Use the following arguments if you need to pass additional parameters to the API that aren't available via kwargs.
44 # The extra values given here take precedence over values defined on the client or passed to this method.
45 extra_headers: Headers | None = None,
46 extra_query: Query | None = None,
47 extra_body: Body | None = None,
48 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
49 ) -> Edit:
50 """
51 Creates a new edit for the provided input, instruction, and parameters.
52
(...)
81 timeout: Override the client-level default timeout for this request, in seconds
82 """
83 return self._post(
84 "/edits",
85 body=maybe_transform(
(...)
99 cast_to=Edit,
100 )
AttributeError: module 'typing_extensions' has no attribute 'deprecated'
```
I'm using ChatOpenAI with the following libs:

and python version is
Python 3.10.12
Until a few days ago these worked great, and I didn't touch the code.
So, what's happening here?
thanks
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
llm_model = "gpt-4-1106-preview"
llm = ChatOpenAI(
temperature=0,
model=llm_model
)
-- using databricks, with langchain and openai
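My current guess (not a confirmed fix): `typing_extensions.deprecated` only exists in typing_extensions >= 4.5, and the Databricks runtime may be pinning an older build that shadows the cluster library, so either upgrading it inside the notebook or pinning openai back might work:

```python
%pip install --upgrade typing_extensions
# or instead: %pip install "openai<1.0"
dbutils.library.restartPython()
```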
### Expected behavior
just an ok and chatopenai working | AttributeError: module 'typing_extensions' has no attribute 'deprecated' when using ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/13970/comments | 1 | 2023-11-28T14:15:08Z | 2024-03-13T20:01:42Z | https://github.com/langchain-ai/langchain/issues/13970 | 2,014,540,587 | 13,970 |
[
"hwchase17",
"langchain"
] | ### System Info
Hi, I'm getting this error trace when running a basic `create_extraction_chain` example. I'm using CTransformers (llama2 type), more specifically this model: https://huggingface.co/clibrain/Llama-2-13b-ft-instruct-es-gguf
Python version: 3.11
Environment:
```
accelerate 0.24.1
aiofiles 22.1.0
aiohttp 3.9.0
aiosignal 1.3.1
aiosqlite 0.19.0
alembic 1.9.4
anyio 3.7.1
appnope 0.1.3
argilla 1.19.0
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
asttokens 2.4.1
async-lru 2.0.4
attrs 23.1.0
Babel 2.13.1
backoff 1.11.1
bcrypt 4.0.1
beautifulsoup4 4.12.2
bleach 6.1.0
Brotli 1.1.0
brotli-asgi 1.2.0
certifi 2023.11.17
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.7
coloredlogs 15.0.1
comm 0.2.0
commonmark 0.9.1
cryptography 41.0.5
ctransformers 0.2.27
dataclasses-json 0.6.3
datasets 2.15.0
debugpy 1.8.0
decorator 5.1.1
defusedxml 0.7.1
Deprecated 1.2.14
dill 0.3.7
diskcache 5.6.3
ecdsa 0.18.0
elastic-transport 8.10.0
elasticsearch8 8.7.0
executing 2.0.1
fastapi 0.104.1
fastjsonschema 2.19.0
filelock 3.13.1
fqdn 1.5.1
frozenlist 1.4.0
fsspec 2023.10.0
greenlet 3.0.1
h11 0.14.0
httpcore 0.16.3
httptools 0.6.1
httpx 0.23.3
huggingface-hub 0.19.4
humanfriendly 10.0
idna 3.4
ipykernel 6.27.0
ipython 8.17.2
isoduration 20.11.0
jedi 0.19.1
Jinja2 3.1.2
joblib 1.3.2
json5 0.9.14
jsonpatch 1.33
jsonpointer 2.4
jsonschema 4.20.0
jsonschema-specifications 2023.11.1
jupyter_client 8.6.0
jupyter_core 5.5.0
jupyter-events 0.9.0
jupyter-lsp 2.2.0
jupyter_server 2.11.0
jupyter_server_terminals 0.4.4
jupyterlab 4.0.9
jupyterlab-pygments 0.2.2
jupyterlab_server 2.25.2
langchain 0.0.341
langchain-core 0.0.6
langsmith 0.0.67
llama_cpp_python 0.2.20
Mako 1.3.0
markdown-it-py 3.0.0
MarkupSafe 2.1.3
marshmallow 3.20.1
matplotlib-inline 0.1.6
mdurl 0.1.2
mistune 3.0.2
monotonic 1.6
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.15
mypy-extensions 1.0.0
nbclient 0.9.0
nbconvert 7.11.0
nbformat 5.9.2
nest-asyncio 1.5.8
networkx 3.2.1
nltk 3.8.1
notebook_shim 0.2.3
numpy 1.23.5
opensearch-py 2.0.1
optimum 1.14.1
overrides 7.4.0
packaging 23.2
pandas 1.5.3
pandocfilters 1.5.0
parso 0.8.3
passlib 1.7.4
pexpect 4.8.0
Pillow 10.1.0
pip 23.3.1
platformdirs 4.0.0
prometheus-client 0.19.0
prompt-toolkit 3.0.41
protobuf 4.25.1
psutil 5.9.6
ptyprocess 0.7.0
pure-eval 0.2.2
py-cpuinfo 9.0.0
pyarrow 14.0.1
pyarrow-hotfix 0.6
pyasn1 0.5.1
pycparser 2.21
pydantic 1.10.13
Pygments 2.17.2
python-dateutil 2.8.2
python-dotenv 1.0.0
python-jose 3.3.0
python-json-logger 2.0.7
python-multipart 0.0.6
pytz 2023.3.post1
PyYAML 6.0.1
pyzmq 25.1.1
referencing 0.31.0
regex 2023.10.3
requests 2.31.0
rfc3339-validator 0.1.4
rfc3986 1.5.0
rfc3986-validator 0.1.1
rich 13.0.1
rpds-py 0.13.1
rsa 4.9
safetensors 0.4.0
scikit-learn 1.3.2
scipy 1.11.4
segment-analytics-python 2.2.0
Send2Trash 1.8.2
sentence-transformers 2.2.2
sentencepiece 0.1.99
setuptools 68.2.2
six 1.16.0
smart-open 6.4.0
sniffio 1.3.0
soupsieve 2.5
SQLAlchemy 2.0.23
stack-data 0.6.3
starlette 0.27.0
sympy 1.12
tenacity 8.2.3
terminado 0.18.0
threadpoolctl 3.2.0
tinycss2 1.2.1
tokenizers 0.15.0
torch 2.1.1
torchvision 0.16.1
tornado 6.3.3
tqdm 4.66.1
traitlets 5.13.0
transformers 4.35.2
typer 0.9.0
types-python-dateutil 2.8.19.14
typing_extensions 4.8.0
typing-inspect 0.9.0
uri-template 1.3.0
urllib3 1.26.18
uvicorn 0.20.0
uvloop 0.19.0
watchfiles 0.21.0
wcwidth 0.2.12
webcolors 1.13
webencodings 0.5.1
websocket-client 1.6.4
websockets 12.0
wrapt 1.14.1
xxhash 3.4.1
yarl 1.9.3
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Same setup as described in the System Info section above; the full error trace:
```
OutputParserException Traceback (most recent call last)
Cell In[9], line 17
12 chain = create_extraction_chain(schema, llm)
14 inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
15 Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by."""
---> 17 chain.run(inp)
File ~/Desktop/green-jobs/genv/lib/python3.11/site-packages/langchain/chains/base.py:507, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
505 if len(args) != 1:
506 raise ValueError("`run` supports only one positional argument.")
--> 507 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
508 _output_key
509 ]
511 if kwargs and not args:
512 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
513 _output_key
514 ]
File ~/Desktop/green-jobs/genv/lib/python3.11/site-packages/langchain/chains/base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
310 except BaseException as e:
311 run_manager.on_chain_error(e)
--> 312 raise e
313 run_manager.on_chain_end(outputs)
314 final_outputs: Dict[str, Any] = self.prep_outputs(
315 inputs, outputs, return_only_outputs
316 )
File ~/Desktop/green-jobs/genv/lib/python3.11/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
299 run_manager = callback_manager.on_chain_start(
300 dumpd(self),
301 inputs,
302 name=run_name,
303 )
304 try:
305 outputs = (
--> 306 self._call(inputs, run_manager=run_manager)
307 if new_arg_supported
308 else self._call(inputs)
309 )
310 except BaseException as e:
311 run_manager.on_chain_error(e)
File ~/Desktop/green-jobs/genv/lib/python3.11/site-packages/langchain/chains/llm.py:104, in LLMChain._call(self, inputs, run_manager)
98 def _call(
99 self,
100 inputs: Dict[str, Any],
101 run_manager: Optional[CallbackManagerForChainRun] = None,
102 ) -> Dict[str, str]:
103 response = self.generate([inputs], run_manager=run_manager)
--> 104 return self.create_outputs(response)[0]
File ~/Desktop/green-jobs/genv/lib/python3.11/site-packages/langchain/chains/llm.py:258, in LLMChain.create_outputs(self, llm_result)
256 def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:
257 """Create outputs from response."""
--> 258 result = [
259 # Get the text of the top generated string.
260 {
261 self.output_key: self.output_parser.parse_result(generation),
262 "full_generation": generation,
263 }
264 for generation in llm_result.generations
265 ]
266 if self.return_final_only:
267 result = [{self.output_key: r[self.output_key]} for r in result]
File ~/Desktop/green-jobs/genv/lib/python3.11/site-packages/langchain/chains/llm.py:261, in <listcomp>(.0)
256 def create_outputs(self, llm_result: LLMResult) -> List[Dict[str, Any]]:
257 """Create outputs from response."""
258 result = [
259 # Get the text of the top generated string.
260 {
--> 261 self.output_key: self.output_parser.parse_result(generation),
262 "full_generation": generation,
263 }
264 for generation in llm_result.generations
265 ]
266 if self.return_final_only:
267 result = [{self.output_key: r[self.output_key]} for r in result]
File ~/Desktop/green-jobs/genv/lib/python3.11/site-packages/langchain/output_parsers/openai_functions.py:130, in JsonKeyOutputFunctionsParser.parse_result(self, result, partial)
129 def parse_result(self, result: List[Generation], *, partial: bool = False) -> Any:
--> 130 res = super().parse_result(result, partial=partial)
131 if partial and res is None:
132 return None
File ~/Desktop/green-jobs/genv/lib/python3.11/site-packages/langchain/output_parsers/openai_functions.py:68, in JsonOutputFunctionsParser.parse_result(self, result, partial)
66 generation = result[0]
67 if not isinstance(generation, ChatGeneration):
---> 68 raise OutputParserException(
69 "This output parser can only be used with a chat generation."
70 )
71 message = generation.message
72 try:
OutputParserException: This output parser can only be used with a chat generation.
```
I'm using the following example:
```
schema = {
"properties": {
"person_name": {"type": "string"},
"person_height": {"type": "integer"},
"person_hair_color": {"type": "string"},
"dog_name": {"type": "string"},
"dog_breed": {"type": "string"},
},
"required": [],
}
chain = create_extraction_chain(schema, llm)
inp = """Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
Willow is a German Shepherd that likes to play with other dogs and can always be found playing with Milo, a border collie that lives close by."""
chain.run(inp)
```
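In case it is useful context for triage: I gather `create_extraction_chain` builds on OpenAI-style function calling, so a plain CTransformers completion model may simply not be supported there. The fallback I am experimenting with for local models is prompt-based extraction with `PydanticOutputParser` (a sketch, reusing my `llm` and `inp` from above):

```python
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from langchain.pydantic_v1 import BaseModel, Field

class Person(BaseModel):
    person_name: str = Field(description="name of the person")
    person_height: int = Field(description="height in feet")
    person_hair_color: str = Field(description="hair color")

parser = PydanticOutputParser(pydantic_object=Person)
prompt = PromptTemplate(
    template="Extract the fields from the text.\n{format_instructions}\nText: {text}\n",
    input_variables=["text"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
result = parser.parse(llm(prompt.format(text=inp)))
```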
### Expected behavior
I expect not to get an error when executing the code | OutputParserException in extraction use case | https://api.github.com/repos/langchain-ai/langchain/issues/13969/comments | 3 | 2023-11-28T14:13:25Z | 2024-03-08T16:47:51Z | https://github.com/langchain-ai/langchain/issues/13969 | 2,014,537,056 | 13,969 |
[
"hwchase17",
"langchain"
] | ### System Info
**python:** 3.11.6
**langchain:** 0.0.335
**openai:** 0.28.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried to create custom tools for agent type ZERO_SHOT_REACT_DESCRIPTION, but after executing the first chain it keeps getting a "Caused by NewConnectionError" error on the next chain. The tools work fine, because I tested them with agent type OPENAI_FUNCTIONS.
My first tool
```
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import StructuredTool

# get_thing_ori is my own helper, defined elsewhere
def get_thing(u: str) -> str:
c, l = get_thing_ori(u, True)
r = c + '\n' + l
return r
class GetThingCheckInput(BaseModel):
u: str = Field(..., description='thing to get')
def get_thing_tool():
GetThingTool = StructuredTool.from_function(
name="get_thing",
description="Useful to get thing",
func=get_thing,
args_schema=GetThingCheckInput
)
return GetThingTool
```
My second tool
```
def get_S(t: str, l=str, c=str) -> str:
u_s, d_s = get_so()
result = ''
for u, d in zip(u_s, d_s):
result += f'U: {u}\nD: {d}\n\n'
return result
def parsing_get_S(string: str):
t, l, c = string.split(', ')
return (get_S(t, l, c))
class ParsingGetSCheckInput(BaseModel):
string: str = Field(..., description='A string contain a data in format: "T, L, C"')
def get_parsing_get_S_tool():
ParsingGetSTool = StructuredTool.from_function(
name = 'parsing_get_S',
description="Useful to get S. Input is a comma-seperated list contain data in format: 'T, L, C'",
func=parsing_get_S,
args_schema=ParsingGetSCheckInput
)
return ParsingGetSTool
```
This is my main
```
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI

if __name__ == '__main__':
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0, max_retries=5, timeout=100)
GetThingTool = get_thing_tool()
ParsingGetSTool = get_parsing_get_S_tool()
tools = [GetThingTool, ParsingGetSTool]
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("Who is Leo DiCaprio's girlfriend? What is her current age raised to the 0.43 power?")
```
I tried to add both `os.environ["LANGCHAIN_TRACING"] = "true"` and `os.environ["LANGCHAIN_TRACING"] = "true"`, but it is not working.
Full error is:
```
ERROR:root:download error: 'https://www.dailymail.co.uk/tvshowbiz/article-12792715/Leonardo-DiCaprio-low-key-glamorous-girlfriend-Vittoria-Ceretti-family-London.html' HTTPConnectionPool(host="'https", port=80): Max retries exceeded with url: //[www.dailymail.co.uk/tvshowbiz/article-12792715/Leonardo-DiCaprio-low-key-glamorous-girlfriend-Vittoria-Ceretti-family-London.html](https://file+.vscode-resource.vscode-cdn.net/d%3A/Python%20Project/www.dailymail.co.uk/tvshowbiz/article-12792715/Leonardo-DiCaprio-low-key-glamorous-girlfriend-Vittoria-Ceretti-family-London.html)' (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0000018BEA481B90>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
```
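One detail I noticed in the traceback: the requested host is `'https`, i.e. the URL reached my tool wrapped in an extra quote, which is something ReAct-style agents sometimes do with their Action Input. A cheap guard I am going to try (sketch):

```python
def get_thing(u: str) -> str:
    u = u.strip().strip("'\"")   # drop stray quotes the agent may wrap around the URL
    c, l = get_thing_ori(u, True)
    return c + '\n' + l
```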
### Expected behavior
As I use Agent type OPENAI.FUNTIONS, it completely returns a full answer, so that means there is nothing wrong with the tools, i guess. It should work for agent type ZERO_SHOT_REACT_DESCRIPTION. The reason I use zero shot is because the thinking part works better in different languages than agent-type OPENAI.FUNTIONS. | Caused by NewConnectionError when using ZERO_SHOT_REACT_DESCRIPTION | https://api.github.com/repos/langchain-ai/langchain/issues/13968/comments | 2 | 2023-11-28T13:57:21Z | 2023-11-30T11:03:08Z | https://github.com/langchain-ai/langchain/issues/13968 | 2,014,504,640 | 13,968 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hello Team,
We are looking for a way to pass the context (previous question and answer) to **create_pandas_dataframe_agent**. Can you please help me understand how I can pass the context (previous question and answer) to **create_pandas_dataframe_agent**? It would be helpful if you have an example implementation of this.
Thanks,
Akash
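For reference, the pattern I have seen suggested elsewhere (unverified; `df` is our DataFrame) is to give the underlying executor a memory object and expose `{chat_history}` in a custom suffix:

```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain_experimental.agents import create_pandas_dataframe_agent

memory = ConversationBufferMemory(memory_key="chat_history")

agent = create_pandas_dataframe_agent(
    ChatOpenAI(temperature=0),
    df,
    verbose=True,
    agent_executor_kwargs={"memory": memory},
    suffix="Previous conversation:\n{chat_history}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}",
    input_variables=["input", "chat_history", "agent_scratchpad"],
    include_df_in_prompt=None,
)
```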
### Suggestion:
_No response_ | Issue: Passing context(previous question and answer) to the create_pandas_dataframe_agent function. | https://api.github.com/repos/langchain-ai/langchain/issues/13967/comments | 16 | 2023-11-28T13:30:17Z | 2024-03-18T16:06:56Z | https://github.com/langchain-ai/langchain/issues/13967 | 2,014,447,411 | 13,967 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
**Summary:**
The FAISS similarity search in LangChain is encountering difficulties when processing alpha-numeric queries that involve numeric integers. While the search performs well for queries like "What are resolutions to problems related to SAF," it exhibits suboptimal behavior when processing queries such as "Give me complete details of L2-resolution against ORA-14300," which involve alpha-numeric combinations.
Note that we have already successfully embedded and indexed the above documents, which include alphanumeric keys
such as "ORA-14300".
**Expected Behavior:**
The FAISS similarity search should accurately and effectively retrieve relevant information for alpha-numeric queries, providing precise results even when numeric integers are included in the query.
**Current Behavior:**
The search is not functioning correctly when processing alpha-numeric queries with numeric integers. It fails to accurately identify and retrieve relevant documents, leading to a suboptimal user experience.


**Steps to Reproduce:**
Index CSV data containing both text and numerical values, and subsequently execute a query that includes an alphanumeric question.
**Additional Information:**
Environment: Langchain version (0.0.284)
**Impact:**
This issue affects the accuracy and reliability of the FAISS similarity search, particularly when handling alpha-numeric queries that include numeric integers. Users relying on LangChain for information retrieval may experience challenges when seeking relevant documents related to such queries.
**Priority:**
High
Are FAISS and Redis similarity searches capable of providing this kind of precise matching over the index? If not, please guide me on where I should turn to achieve better and more accurate results.
Thank you for your attention to this matter. Feel free to request additional information if needed.
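One direction we are considering in the meantime (a sketch; `docs` and `faiss_db` refer to our already-chunked documents and existing FAISS store): dense embeddings are often weak on exact codes like "ORA-14300", so combining a lexical BM25 retriever with the embedding retriever may help.

```python
from langchain.retrievers import BM25Retriever, EnsembleRetriever

bm25 = BM25Retriever.from_documents(docs)
bm25.k = 3
faiss_retriever = faiss_db.as_retriever(search_kwargs={"k": 3})

hybrid = EnsembleRetriever(retrievers=[bm25, faiss_retriever], weights=[0.5, 0.5])
results = hybrid.get_relevant_documents(
    "Give me complete details of L2-resolution against ORA-14300"
)
```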
### Suggestion:
_No response_ | Why does FAISS similarity search not fetch data with respect to alphanumeric keys like ORA-14300? | https://api.github.com/repos/langchain-ai/langchain/issues/13964/comments | 1 | 2023-11-28T12:50:29Z | 2024-03-13T20:03:49Z | https://github.com/langchain-ai/langchain/issues/13964 | 2,014,369,520 | 13,964 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I create my vector db using the following code:
```
db = Chroma.from_documents(
chunked_documents,
embeddings,
persist_directory=db_path + '/' + db_type,
client_settings=chroma_settings,)
```
`chunked_documents` is a list of elements of type Document.
I have added metadata which is a simple numerical id: `{'id': 1}`
```
embeddings = HuggingFaceInstructEmbeddings(
model_name=args.embedding_model,
model_kwargs={"device": args.device},
)
```
```
CHROMA_SETTINGS = Settings(
anonymized_telemetry=False,
is_persistent=True,
)
```
What happens is that I run `db.similarity_search(query, k=3)` and for part of the answers, the metadata dict is empty. Has anyone encountered such an issue?
Just to point out: when I create the db using the `from_texts()` method, where I add the raw texts and metadata separately, I do not encounter the issue, and when running `db.similarity_search()` the returned answers contain the respective metadata.
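A sanity check I still need to run (sketch): confirm the metadata survives the chunking step before it ever reaches Chroma, since empty metadata after splitting would explain what I am seeing.

```python
for doc in chunked_documents[:5]:
    print(type(doc), doc.metadata)
```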
### Suggestion:
_No response_ | Issue: Chroma.from_documents does not save metadata properly | https://api.github.com/repos/langchain-ai/langchain/issues/13963/comments | 5 | 2023-11-28T12:16:56Z | 2024-05-02T16:04:54Z | https://github.com/langchain-ai/langchain/issues/13963 | 2,014,310,258 | 13,963 |
[
"hwchase17",
"langchain"
] | ### System Info
I am using the SelfQueryRetriever class to ask questions about a set of documents. It seems that there is an issue with the use of the `enable_limit` argument.
Versions:
Langchain Version: 0.0.340
openai 1.3.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
enable_limit = False:
```python
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=False
)
query = "I want indentify thousand repairs with xxxx"
result = retriever.invoke(query)
len(result)
#Result: 4
```
If change enable_limit = True:
```python
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True
)
query = "I want indentify thousand repairs with xxxx"
result = retriever.invoke(query)
len(result)
#Result: 1000
```
If enable_limit=True and change in query for "All":
```python
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True
)
query = "I want indentify All repairs with xxxx"
result = retriever.invoke(query)
len(result)
#Result: 4
```
### Expected behavior
The expected behaviour with "enable_limit = False" was to show more than 1000 documents.
As there is no defined limit, all documents were expected as a result. | Enable Limit False in Self Query Retriever doesn't have the expected behavior | https://api.github.com/repos/langchain-ai/langchain/issues/13961/comments | 1 | 2023-11-28T11:50:02Z | 2024-03-13T20:01:17Z | https://github.com/langchain-ai/langchain/issues/13961 | 2,014,263,985 | 13,961 |
[
"hwchase17",
"langchain"
] | ### System Info
Current LangChain master commit:
https://github.com/langchain-ai/langchain/commits/391f200
### Who can help?
@hwchase17 @baskaryan
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just run
```
from langchain.chat_models import BedrockChat
```
Stacktrace:
```
tests/e2e-tests/test_compatibility_matrix.py:10: in <module>
from langchain.chat_models import ChatOpenAI, AzureChatOpenAI, ChatVertexAI, BedrockChat
/tmp/venv/lib/python3.11/site-packages/langchain/chat_models/__init__.py:20: in <module>
from langchain.chat_models.anthropic import ChatAnthropic
/tmp/venv/lib/python3.11/site-packages/langchain/chat_models/anthropic.py:18: in <module>
from langchain.chat_models.base import (
/tmp/venv/lib/python3.11/site-packages/langchain/chat_models/base.py:1: in <module>
from langchain_core.language_models.chat_models import (
E ImportError: cannot import name 'agenerate_from_stream' from 'langchain_core.language_models.chat_models' (/tmp/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py)
```
### Expected behavior
To be able to import the required modules | ImportError: cannot import name 'agenerate_from_stream' from 'langchain_core.language_models.chat_models' | https://api.github.com/repos/langchain-ai/langchain/issues/13958/comments | 5 | 2023-11-28T09:40:11Z | 2023-11-29T13:33:23Z | https://github.com/langchain-ai/langchain/issues/13958 | 2014030020 | 13958 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm currently working on a project and have encountered an issue with the deletion functionality for Confluence
Spaces. I've implemented a function delete_embeddings that is supposed to delete embeddings based on the Confluence space key, but it doesn't seem to be working as expected.
Here's the relevant code snippet:
```python
def delete_embeddings(file_path, persist_directory):
    chroma_db = chromadb.PersistentClient(path=persist_directory)
    collection = chroma_db.get_or_create_collection(name="langchain")
    ids = collection.get(where={"source": file_path})['ids']
    collection.delete(where={"source": file_path}, ids=ids)
    # chroma_db.delete_collection(name="langchain")
    print("delete successfully")
```
And I'm calling this function as follows:
```python
delete_embeddings(names, persist_directory)
```
I want to delete the embeddings of a Confluence space when a user requests deletion of that space.
### Suggestion:
_No response_ | Issue: Question about Deletion of Embeddings for Confluence Spaces | https://api.github.com/repos/langchain-ai/langchain/issues/13956/comments | 5 | 2023-11-28T08:26:59Z | 2024-03-13T19:58:26Z | https://github.com/langchain-ai/langchain/issues/13956 | 2,013,904,967 | 13,956 |
[
"hwchase17",
"langchain"
] | ### Feature request
PGVector in Langchain does not support advanced metadata filtering, such as an "OR" clause.
For now, there is no way to perform filters such as:
```
{
"$or": [
{"uploaded_by": {"$eq": "USER1"}},
{"org": {"$eq": "ORG"}},
]
}
```
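For illustration, this is the kind of call the feature would enable (hypothetical usage; today PGVector's `filter` kwarg only handles simple per-key conditions):
```python
# Hypothetical once "$or" is supported; `store` is an existing PGVector instance
docs = store.similarity_search(
    "onboarding policy",
    k=4,
    filter={
        "$or": [
            {"uploaded_by": {"$eq": "USER1"}},
            {"org": {"$eq": "ORG"}},
        ]
    },
)
```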
### Motivation
Our team is unable to use langchain with PGVector due to its lack of support for "OR" filter.
Having advanced metadata filtering like that in Pinecone/Qdrant would really help
https://docs.pinecone.io/docs/metadata-filtering
### Your contribution
For now, I see existing PR: https://github.com/langchain-ai/langchain/pull/12977
This could possibly solve the issue. | Advance metadata filtering for PGVector | https://api.github.com/repos/langchain-ai/langchain/issues/13955/comments | 4 | 2023-11-28T08:15:46Z | 2024-07-04T16:07:13Z | https://github.com/langchain-ai/langchain/issues/13955 | 2,013,887,941 | 13,955 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
I'm currently working on a project and have encountered an issue with the deletion functionality for Confluence Spaces. I've implemented a function `delete_embeddings` that is supposed to delete embeddings based on the Confluence space key, but it doesn't seem to be working as expected.
Here's the relevant code snippet:
```python
def delete_embeddings(file_path, persist_directory):
    chroma_db = chromadb.PersistentClient(path=persist_directory)
    collection = chroma_db.get_or_create_collection(name="langchain")
    ids = collection.get(where={"source": file_path})['ids']
    collection.delete(where={"source": file_path}, ids=ids)
    # chroma_db.delete_collection(name="langchain")
    print("delete successfully")
```
And I'm calling this function as follows:
```python
delete_embeddings(names, persist_directory)
```
### Suggestion:
_No response_ | Issue: Question about Deletion Functionality for Confluence Spaces | https://api.github.com/repos/langchain-ai/langchain/issues/13954/comments | 2 | 2023-11-28T06:57:30Z | 2024-03-13T20:01:58Z | https://github.com/langchain-ai/langchain/issues/13954 | 2,013,781,829 | 13,954 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I was hoping to use the Dropbox document loader for a large number of pdf and some docx documents, however I am not sure whether this loader supports these file types. I followed the instructions on https://python.langchain.com/docs/integrations/document_loaders/dropbox and installed the "unstructured[all-docs]" package but I keep getting the message that the loader skips these files.
> xxx.docx could not be decoded as text. Skipping.
> yyy.pdf could not be decoded as text. Skipping.
Does this loader only support .txt files? Is there an alternative? I see the Unstructured loader only works for individual files, is that the best alternative?
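For context, this is the kind of fallback I was considering (sketch only; it assumes the Dropbox files are synced to a local folder):
```python
from langchain.document_loaders import DirectoryLoader, UnstructuredFileLoader

# Load every PDF under a locally synced Dropbox folder, one file at a time
loader = DirectoryLoader("/path/to/Dropbox/docs", glob="**/*.pdf",
                         loader_cls=UnstructuredFileLoader)
docs = loader.load()
```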
Many thanks!
### Idea or request for content:
The file formats the loader supports need to be clarified.
The unstructured package was given as a prerequisite for PDF files, but I was getting missing package/method errors until I installed the "unstructured[all-docs]" package, and I am still not able to load PDF files. | DOC: Dropbox document loader functionality | https://api.github.com/repos/langchain-ai/langchain/issues/13952/comments | 7 | 2023-11-28T06:18:48Z | 2023-12-03T22:19:38Z | https://github.com/langchain-ai/langchain/issues/13952 | 2013737662 | 13952 |
[
"hwchase17",
"langchain"
] | ### System Info
To index data into Pinecone, I used the following call:
```python
_pinecone = Pinecone.from_documents(docs, self.embeddings, index_name=self.index_name)
```
All the parameters always have values:


But, this error occurs often like this:

@hwchase17 , I am looking forward to fixing this issue asap. Thank you.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
class PineConeIndexer:
    def __init__(self):
        self.embeddings = OpenAIEmbeddings()
        # initialize pinecone
        pinecone.init(
            api_key=os.environ.get('PINECONE_API_KEY'),
            environment=os.environ.get('PINECONE_ENV')
        )
        self.index_name = os.environ.get('PINECONE_INDEX_NAME')

    def upsert_index_from_task(self, task):
        try:
            # get doc
            ....
            _pinecone = Pinecone.from_documents(docs, self.embeddings, index_name=self.index_name)
            return {"success": True, "error": None}
        except Exception as e:
            return {"success": False, "error": str(e)}
```
### Expected behavior
Sometimes, this error occurs:
```
PineconeProtocolError: Failed to connect; did you specify the correct index name?
ProtocolError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
```
| PineconeProtocolError: Failed to connect; did you specify the correct index name? | https://api.github.com/repos/langchain-ai/langchain/issues/13951/comments | 3 | 2023-11-28T05:49:35Z | 2024-06-08T16:07:35Z | https://github.com/langchain-ai/langchain/issues/13951 | 2,013,694,385 | 13,951 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
This is the input code:
```python
import pandas as pd
import json
from IPython.display import Markdown, display
from langchain.agents import create_csv_agent
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
import os
os.environ["OPENAI_API_KEY"] = ""
# Load the dataset
df = pd.read_csv('Loan Collections - Sheet1.csv')
# Function to convert date and time columns to float
def convert_date_time_columns_to_float(df):
for column in df.select_dtypes(include=['object']).columns:
# Check if the column name contains 'date' or 'time'
if 'date' in column.lower() or 'time' in column.lower():
try:
# Convert the column to datetime
df[column] = pd.to_datetime(df[column], errors='coerce')
# Convert datetime to numerical representation (e.g., days since a reference date)
reference_date = pd.to_datetime('1900-01-01')
df[column] = (df[column] - reference_date).dt.total_seconds() / (24 * 60 * 60)
except ValueError:
# Handle errors during conversion
print(f"Error converting column '{column}' to float.")
# Convert 'date' and 'time' columns to float
convert_date_time_columns_to_float(df)
# Extract unique values for each column
unique_values_per_column = {}
for column in df.select_dtypes(include=['object']).columns:
unique_values_per_column[column] = df[column].unique().tolist()
# Convert the dictionary to JSON
json_data_train = json.dumps(unique_values_per_column, indent=4)
testData_fname = "Sample Retail Stores Data.csv"
# Load the dataset
df2 = pd.read_csv(testData_fname)
convert_date_time_columns_to_float(df2)
# Extract unique values for each column
unique_values_per_column = {}
for column in df2.select_dtypes(include=['object']).columns:
unique_values_per_column[column] = df2[column].unique().tolist()
# Convert the dictionary to JSON
json_data_test = json.dumps(unique_values_per_column, indent=4)
# Define user's question
user_question = "Percentage share of State by Value?"
# Define the prompt template
prompt_template = f'''If the dataset has the following columns: {json_data_train}'''+''' Understand user questions with different column names and convert them to a JSON format.
Question might not even mentioned column name at all, it would probably mention value of the column. so it has to figure it out columnn name based on that value.
Example1:
User Question1: top zone in the year 2019 with Loan Amt between 10k and 20k and tenure > 12 excluding Texas region?
{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": [],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["Zone"],
"filters": {},
"not_in": {"Region": ["Texas"]},
"num_filter": {
"gt": [
["Loan Tenure", 12],
["Loan Amount", 10000]
],
"lt": [
["Loan Amount", 20000]
]
},
"percent": "false",
"top": "1",
"bottom": "null"
}
Note the following in the above example
- The word "top" in the User Question made the "top" key have the value as "1". If "highest" is mentioned in the User Question, even then "top" would have the value as "1". If "top" is not mentioned or not implied in the User Question, then it takes on the value "null". Similarly for "bottom" key in the System Response.
- The word "zone" in the User Question refers to a column "Zone" in the dataset and since it is a non-numeric column and we have to group by that column, the system response has it as one of the values of the list of the key "variables_grpby"
- The key "start_date" and "end_date" Since it is mentioned 2019 in the User Question as the timeframe, the "start_date" assumes the beginning of the year 2019 and "end_date" assumes the end of the year 2019. If no date related words are mentioned in the question, "start_date" would be "null" and "end_date" would be "null".
- The key "time_stamp_col" in the System Response should mention the relevant time related column name from the dataset according to the question if the question mentions a time related word.
- The key "agg_columns" in the System Response is a list of columns to be aggregated which should mention the numeric column names on which the question wants us to aggregate on.
- The key "trend" in the System Response, "trend" is set to "null" since the user question doesn't imply any trend analysis . If the question were about trends over time, this key would contain information about the trend, such as "upward," "downward," or "null" if no trend is specified.
- The key "filters" An empty dictionary in this case, as there are no explicit filters mentioned in the user question. If the user asked to filter data based on certain conditions (e.g. excluding a specific region), this key would contain the relevant filters.
- The key "to_start_date" and "to_end_date" Both set to "null" in this example because the user question specifies a single timeframe (2019). If the question mentioned a range (e.g. "from January 2019 to March 2019"), these keys would capture the specified range.
- The key "growth" Set to "null" in this example as there is no mention of growth in the user question. If the user inquired about growth or change over time, this key would provide information about the type of growth (e.g."monthly","yearly"," "absolute") or be set to "null" if not applicable.
- The key "not_in" Contains information about exclusion criteria based on the user's question. In this example, it excludes the "Texas" region. If the user question doesn't involve exclusions, this key would be an empty dictionary.
- The key "num_filter" Specifies numerical filters based on conditions in the user question. In this example, it filters loans with a tenure greater than 12 and loan amounts between 10k and 20k. If the user question doesn't involve numerical filters, this key would be an empty dictionary.
- The key "percent" Set to "false" in this example as there is no mention of percentage in the user question. If the user inquired about percentages, this key would contain information about the use of percentages in the response.
Similarly, below are more examples of user questions and their corresponding expected System Responses.
Example 2:
User Question: What is the Highest Loan Amount and Loan Outstanding by RM Name James in January 2020
{
"start_date": "01-01-2020",
"end_date": "31-01-2020",
"time_stamp_col": "Due Date",
"agg_columns": ["Loan Amount", "Loan Outstanding"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": [],
"filters": {"RM Name": ["James"]},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "1",
"bottom": "null"
}
Example 3:
User Question: Which RM Name with respect to Region has the Highest Interest Outstanding and Principal Outstanding in the year 2019
{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Interest Outstanding", "Principal Outstanding"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["RM Name", "Region"],
"filters": {},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "1",
"bottom": "null"
}
Example 4:
User Question: Which Branch in North Carolina with respect to Cibil Score Bucket has the Highest Cibil Score in 2019
{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Cibil Score", "DPD Bucket"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["Branch"],
"filters": {"Region": ["North Carolina"]},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "1",
"bottom": "null"
}
''
Example 5:
User Question: With respect to Zone, Region, Branch, RM Name what is the Highest Loan Amount, Loan Tenure, Loan Outstanding, EMI Pending, Principal Outstanding
{
"start_date": "null",
"end_date": "null",
"time_stamp_col": "null",
"agg_columns": ["Loan Amount", "Loan Tenure", "Loan Outstanding", "EMI Pending", "Principal Outstanding"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["Zone", "Region", "Branch", "RM Name"],
"filters": {},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "1",
"bottom": "null"
}
Example 6:
User Question: Top 2 zones by Housing Loan in the year 2019
{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Housing Loan"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["Zone"],
"filters": {"Product": ["Home Loan"]},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "2",
"bottom": "null"
}
'''+ f'''Our test dataset has the following columns: {json_data_test}
User Question (to be converted): {user_question}'''
# Load the agent
agent = create_csv_agent(OpenAI(temperature=0), testData_fname, verbose=True)
gpt4_agent = create_csv_agent(ChatOpenAI(temperature=0, model_name="gpt-4-1106-preview"), testData_fname)
# Use the formatted question as the input to your agent
response = gpt4_agent.run(prompt_template)
# Print the response
print(user_question)
print(response)
```
And this is the output:
```
Percentage share of State by Value?
{
"start_date": "null",
"end_date": "null",
"time_stamp_col": "null",
"agg_columns": ["Value"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["State"],
"filters": {},
"not_in": {},
"num_filter": {},
"percent": "true",
"top": "null",
"bottom": "null"
}
```
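One way to measure this for the run above is LangChain's OpenAI callback (sketch; the per-token prices it uses are estimates, so check OpenAI's pricing page for the model you run):
```python
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    response = gpt4_agent.run(prompt_template)

print(cb.prompt_tokens)      # input tokens sent in the prompt
print(cb.completion_tokens)  # output tokens generated by the model
print(cb.total_cost)         # estimated USD cost for the call
```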
### Idea or request for content:
_No response_ | can you tell me how to calculate what will be cost for this input tokens in prompt and output tokens in prompt | https://api.github.com/repos/langchain-ai/langchain/issues/13947/comments | 1 | 2023-11-28T04:42:02Z | 2024-03-13T20:00:27Z | https://github.com/langchain-ai/langchain/issues/13947 | 2,013,606,586 | 13,947 |
[
"hwchase17",
"langchain"
] | ### System Info
machine: MacBook Pro, macOS Sonoma 14.1.1
package:
python = "3.10.13"
openai = "^0.28.1"
pandas = "^2.1.1"
ipython = "^8.16.0"
langchain = "^0.0.306"
python-dotenv = "^1.0.0"
seaborn = "^0.13.0"
tqdm = "^4.66.1"
torch = "^2.1.0"
transformers = "^4.35.2"
accelerate = "^0.24.1"
sentencepiece = "^0.1.99"
openllm = "^0.4.27"
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the instructions at https://python.langchain.com/docs/integrations/llms/openllm

1) `openllm start dolly-v2`
2) Run:
```python
from langchain.llms import OpenLLM

server_url = "http://localhost:3000"  # Replace with remote host if you are running on a remote server
llm = OpenLLM(server_url=server_url)
llm("what is the meaning of life")
```
Error (raised from `langchain/llms/openllm.py`, around lines 220-229):
```
AttributeError: 'HTTPClient' object has no attribute 'configuration'
```
### Expected behavior
return a response from LLM | 'HTTPClient' object has no attribute 'configuration' | https://api.github.com/repos/langchain-ai/langchain/issues/13943/comments | 3 | 2023-11-28T03:28:12Z | 2024-03-13T20:01:59Z | https://github.com/langchain-ai/langchain/issues/13943 | 2,013,546,758 | 13,943 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain 0.0.340
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Since the openai v1 SDK, the previous method `openai.proxy = {xxx}` is no longer supported, but langchain is still using it.
It seems the openai v1 SDK only supports setting a proxy at the client level; see this issue on the openai SDK repo: https://github.com/openai/openai-python/issues/825#issuecomment-1826047567
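For reference, client-level proxy configuration in the v1 SDK looks roughly like this (sketch; the proxy URL is a placeholder, and depending on your httpx version the argument may be `proxy` instead of `proxies`):
```python
import httpx
from openai import OpenAI

# openai.proxy = {...} no longer exists; the proxy now has to be set on the underlying HTTP client
client = OpenAI(
    api_key="sk-...",
    http_client=httpx.Client(proxies="http://localhost:8080"),
)
```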
### Expected behavior
Langchain should set the proxy the way the openai v1 SDK requires (at the client level). | To support open v1 SDK proxy setting | https://api.github.com/repos/langchain-ai/langchain/issues/13939/comments | 1 | 2023-11-28T02:12:10Z | 2024-03-13T20:00:20Z | https://github.com/langchain-ai/langchain/issues/13939 | 2013460359 | 13939 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
It seems that the agent can only take one variable, "input".
I would like to create an agent with a custom prompt template that takes multiple variables whenever a user types something, e.g. `agent_executor.invoke({"input": "what is my name", "example": example, "user_profile": user_profile})`.
The custom prompt template looks like:
```
user input: {input}
example: {example}
user_profile: {user_profile}
```
### Suggestion:
_No response_ | Issue: create agent takes multiple variables | https://api.github.com/repos/langchain-ai/langchain/issues/13937/comments | 2 | 2023-11-28T01:27:57Z | 2024-03-13T20:00:24Z | https://github.com/langchain-ai/langchain/issues/13937 | 2,013,425,511 | 13,937 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Trying to build the devcontainer following the steps in ~/langchain/.devcontainer/README.md, but it gets stuck at step 10/11.
<img width="876" alt="Screen Shot 2023-11-27 at 7 56 30 PM" src="https://github.com/langchain-ai/langchain/assets/97558871/3b8c803c-bdce-410d-8b7a-19fb0d1bc692">
### Suggestion:
_No response_ | devcontainer fail to built | https://api.github.com/repos/langchain-ai/langchain/issues/13936/comments | 3 | 2023-11-28T00:57:23Z | 2023-11-29T04:10:15Z | https://github.com/langchain-ai/langchain/issues/13936 | 2,013,395,749 | 13,936 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
When utilizing the create_sql_agent module by LangChain to interact with a SQL database and generate SQL queries from natural language, I've encountered an issue with the responses. Currently, when I execute queries, the SQL Agent responds with placeholder information, citing security reasons for omitting the actual data. However, for my use case, it is crucial to receive the real information retrieved from the SQL database tables.
Also, it truncates the information even with `top_k=50`. The database returns 50 records, but the output only shows 10-12 of them, followed by: "Please note that this is a partial list. There are more ..."
langchain == 0.0.313
langchain-experimental == 0.0.32
```python
dbmssql = SQLDatabase.from_uri(
    connection_url,
    include_tables=[
        "table_1",
        "table_2",
        "table_3",
        "table_4",
    ],
    view_support=False
)

chat_llm = ChatOpenAI(
    model="gpt-4",
    temperature=0,
    verbose=True,
    openai_api_key=openai_api_key,
    request_timeout=600
)

def create_mssql_db_agent(dbmssql):
    few_shot_docs = [
        Document(page_content=question, metadata={"sql_query": few_shots[question]})
        for question in few_shots.keys()
    ]
    vector_db = FAISS.from_documents(few_shot_docs, embeddings)
    retriever = vector_db.as_retriever()

    tool_description = """
    This tool will help you understand similar examples to adapt them to the user question.
    Input to this tool should be the user question.
    """
    retriever_tool = create_retriever_tool(
        retriever, name="sql_get_similar_examples", description=tool_description
    )
    custom_tool_list = [retriever_tool]

    custom_suffix = """
    I should first get the similar examples I know.
    If the examples are enough to construct the query, I can build it.
    Otherwise, I can then look at the tables in the database to see what I can query.
    Then I should query the schema of the most relevant tables
    """

    agent = create_sql_agent(
        agent_executor_kwargs={"return_intermediate_steps": True},
        llm=chat_llm,
        toolkit=SQLDatabaseToolkit(db=dbmssql, llm=chat_llm),
        verbose=True,
        agent_type=AgentType.OPENAI_FUNCTIONS,
        extra_tools=custom_tool_list,
        suffix=custom_suffix,
        top_k=50,
        # return_intermediate_steps=True
    )
    return agent
```
### Suggestion:
_No response_ | Issue: create_sql_agent omits and limits actual information retrieved from SQL tables | https://api.github.com/repos/langchain-ai/langchain/issues/13931/comments | 4 | 2023-11-27T22:16:47Z | 2024-04-06T01:22:57Z | https://github.com/langchain-ai/langchain/issues/13931 | 2,013,215,135 | 13,931 |
[
"hwchase17",
"langchain"
] | ### Feature request
Allow intercepting agents' final answers and reporting any feedback on the final answers to the agents without ending the agent execution chain.
This will enable users to, for example, run validations on the final answer (e.g. whether the answer contains some keywords) or agent's state (whether agent has used a particular tool) and report issues to the agent so that it can fix the problems before ending the chain.
### Motivation
Today, we don't have a way to run an analysis on final answers and report problems (if any) to the agents so that they can fix the problems without losing the thoughts and observations of the current chain (if there's a way to achieve that today, please feel free to point me to it and close this issue). This feature will allow self-correction of final answers, further enhancing the capabilities of agents.
Some use cases I have in mind include
1. Validate the final answer to ensure that it conforms to some instructions, e.g. by making an LLM call.
2. Make sure the agent has used a set of tools to come up with the answer.
3. Apply some rules on the answer to determine whether answer is correct or not, e.g. whether the answer contains certain keywords.
### Your contribution
I'm not able to contribute currently, however, I might be able to pick this up in coming weeks if it seems useful to others as well. | Allow intercepting agents' final answers and reporting feedback to them | https://api.github.com/repos/langchain-ai/langchain/issues/13929/comments | 5 | 2023-11-27T18:23:38Z | 2024-03-17T16:06:36Z | https://github.com/langchain-ai/langchain/issues/13929 | 2,012,847,820 | 13,929 |
[
"hwchase17",
"langchain"
] | Hi everyone,
I'm trying to do something and I haven't found enough information on the internet to make it work properly with Langchain. Here it is:
I want to develop a QA chat that uses PDFs as its knowledge source, restricting the relevant documents to the ones corresponding to a certain PDF that the user chooses with a select box. To achieve that:
1. I've built an Azure Search vector store in which the embeddings of all the different documents are stored.
Each document's metadata looks something like this:
```
{
"@odata.context": "https://blublu.search.windows.net/indexes('embeddings')/$metadata#docs(*)",
"@search.nextPageParameters": {
"search": "*",
"top": null,
"skip": 50
},
"value": [
{
"@search.score": 1,
"id": "doc_embeddings_7102233d903cd1ac7475a60c373a716b57bf1586",
"title": "https://blahblah.blob.core.windows.net/documents/converted/100.pdf.txt",
"content": " Officer | Immediate line manager | Region/Division head or VP of the corresponding vertical | KFSL HR | | CEO | \n|Assistant Manager | | \n|Manager | CEO.\nv",
"content_vector": [
-0.0014578825,
-0.0058897766],
"tag": "",
"metadata": "{\"source\": \"[https://ask}"
},...}
```
3. With all this I'm using a ConversationalRetrievalChain to retrieve info from the vector store and an LLM to answer questions entered via the prompt:
```python
class FilteredRetriever:
    def __init__(self, retriever, filter_prefix):
        self.retriever = retriever
        self.filter_prefix = filter_prefix

    def retrieve(self, *args, **kwargs):
        results = self.retriever.retrieve(*args, **kwargs)
        return [doc for doc in results if doc['value']['title'].startswith(self.filter_prefix)]

source = 'https://blahblah.blob.core.windows.net/documents/converted/100.pdf.txt'
filtered_retriever = FilteredRetriever(self.vector_store.as_retriever(), source)

chain = ConversationalRetrievalChain(
    retriever=filtered_retriever,
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
    return_source_documents=True,
    # top_k_docs_for_context= self.k
)
```
But this is raising: instance of BaseRetriever expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseRetriever)
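For context, the chain validates that `retriever` is a `BaseRetriever` instance, so a plain class is rejected. A rough sketch of a conforming wrapper (field names and the metadata key are illustrative, not a confirmed fix):
```python
from typing import List

from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.schema import BaseRetriever, Document

class PrefixFilteredRetriever(BaseRetriever):
    base_retriever: BaseRetriever
    filter_prefix: str

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        # Delegate to the wrapped retriever, then keep only docs from the selected PDF
        docs = self.base_retriever.get_relevant_documents(query)
        return [d for d in docs if d.metadata.get("title", "").startswith(self.filter_prefix)]
```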
### Suggestion:
Have already referred [Filtering retrieval with ConversationalRetrievalChain](https://github.com/langchain-ai/langchain/issues/7474#top)
#7474 | Filtering Issue with ConversationRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/13924/comments | 2 | 2023-11-27T17:19:53Z | 2024-03-13T20:02:28Z | https://github.com/langchain-ai/langchain/issues/13924 | 2,012,745,465 | 13,924 |
[
"hwchase17",
"langchain"
] | - | - | https://api.github.com/repos/langchain-ai/langchain/issues/13923/comments | 1 | 2023-11-27T17:04:38Z | 2023-11-28T05:59:31Z | https://github.com/langchain-ai/langchain/issues/13923 | 2,012,721,002 | 13,923 |
[
"hwchase17",
"langchain"
] | @dosu-bot
I have a project where I need to extract product details from product links online. What is the best URL loader for this use case?
Also, the previous links are already appended to a CSV file called "Lalal". How can I create embeddings only for the new URL links that I extract, so that I don't have to re-embed the entire document every single time?
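To make the question concrete, this is the rough shape I have in mind (sketch; the loader choice, the "url" column name and the `vectorstore`/`incoming_urls` variables are placeholders):
```python
import pandas as pd
from langchain.document_loaders import WebBaseLoader

seen = set(pd.read_csv("Lalal.csv")["url"])      # product links already embedded
new_urls = [u for u in incoming_urls if u not in seen]

if new_urls:
    new_docs = WebBaseLoader(new_urls).load()    # fetch only the new product pages
    vectorstore.add_documents(new_docs)          # append their embeddings instead of rebuilding everything
```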
Please write the code for me in python. | Link to URL to load product details | https://api.github.com/repos/langchain-ai/langchain/issues/13920/comments | 6 | 2023-11-27T16:38:20Z | 2024-03-13T19:59:35Z | https://github.com/langchain-ai/langchain/issues/13920 | 2,012,673,073 | 13,920 |
[
"hwchase17",
"langchain"
] | ### System Info
Langchain all versions
### Who can help?
@hwchase17 @izzymsft
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Instantiate a new AzureCosmosDBVectorSearch with an embedding key different from `vectorContent`, and you get this error:
`Similarity index was not found for a vector similarity search query.`
This is because the embedding key is parameterised correctly here:
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/azure_cosmos_db.py#L75
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/azure_cosmos_db.py#L92
but it is not used on index creation:
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/azure_cosmos_db.py#L224
### Expected behavior
The embedding_key param should be used to create the index properly | AzureCosmosDBVectorSearch index creation fixes document key | https://api.github.com/repos/langchain-ai/langchain/issues/13918/comments | 2 | 2023-11-27T16:11:15Z | 2024-03-13T20:01:29Z | https://github.com/langchain-ai/langchain/issues/13918 | 2,012,621,829 | 13,918 |
[
"hwchase17",
"langchain"
] | ### Feature request
I propose the addition of a new feature, a BinaryPyPdf loader, to the existing Langchain document loaders. This loader is designed to handle PDF files in a binary format, providing a more efficient and effective way of processing PDF documents within the Langchain project.
### Motivation
As a Langchain enthusiast, I noticed that the current document loaders lack a dedicated loader for handling PDF files in binary format. This often leads to inefficiencies and limitations when working with PDF documents. The addition of a BinaryPyPdf loader would address this gap and enhance the overall functionality and versatility of the Langchain document loaders.
### Your contribution
I have already developed a BinaryPyPdf loader using `pypdf` that is ready for integration into the Langchain project. I am prepared to submit a PR for this feature, following the guidelines outlined in the [`CONTRIBUTING.MD`](https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md). I look forward to the opportunity to contribute to the project and enhance its capabilities. | Addition of BinaryPyPdf Loader for Langchain Document Loaders | https://api.github.com/repos/langchain-ai/langchain/issues/13916/comments | 1 | 2023-11-27T16:04:01Z | 2024-03-13T19:55:56Z | https://github.com/langchain-ai/langchain/issues/13916 | 2,012,607,901 | 13,916 |
[
"hwchase17",
"langchain"
] | ### System Info
You cannot run `poetry install --with test` on a fresh build:
```
╭─ username@comp ~/path/to/coding
╰─➤ cd langchain2
ls
╭─username@comp ~/path/to/langchain2 ‹master›
╰─➤ ls
CITATION.cff Makefile cookbook libs pyproject.toml
LICENSE README.md docker poetry.lock templates
MIGRATE.md SECURITY.md docs poetry.toml
╭─username@comp ~/path/to/langchain2 ‹master›
╰─➤ poetry install --with test
Creating virtualenv langchain-monorepo in /path/to/langchain2/.venv
Installing dependencies from lock file
Package operations: 165 installs, 1 update, 0 removals
• Downgrading pip (23.3.1 -> 23.2.1)
• Installing attrs (23.1.0)
• Installing rpds-py (0.10.3)
• Installing referencing (0.30.2)
• Installing six (1.16.0)
• Installing jsonschema-specifications (2023.7.1)
• Installing platformdirs (3.11.0)
• Installing python-dateutil (2.8.2)
• Installing traitlets (5.11.1)
• Installing types-python-dateutil (2.8.19.14)
• Installing arrow (1.3.0)
• Installing entrypoints (0.4)
• Installing fastjsonschema (2.18.1)
• Installing jsonschema (4.19.1)
• Installing jupyter-core (5.3.2)
• Installing nest-asyncio (1.5.8)
• Installing pycparser (2.21)
• Installing pyzmq (25.1.1)
• Installing tornado (6.3.3)
• Installing cffi (1.16.0)
• Installing fqdn (1.5.1)
• Installing idna (3.4)
• Installing isoduration (20.11.0)
• Installing jsonpointer (2.4)
• Installing jupyter-client (7.4.9)
• Installing markupsafe (2.1.3)
• Installing nbformat (5.9.2)
• Installing ptyprocess (0.7.0)
• Installing rfc3339-validator (0.1.4)
• Installing rfc3986-validator (0.1.1)
• Installing soupsieve (2.5)
• Installing uri-template (1.3.0)
• Installing webcolors (1.13)
• Installing webencodings (0.5.1)
• Installing argon2-cffi-bindings (21.2.0): Pending...
• Installing argon2-cffi-bindings (21.2.0)
• Installing asttokens (2.4.0)
• Installing beautifulsoup4 (4.12.2)
• Installing bleach (6.0.0)
• Installing defusedxml (0.7.1)
• Installing executing (2.0.0)
• Installing jinja2 (3.1.2)
• Installing jupyterlab-pygments (0.2.2)
• Installing mistune (3.0.2)
• Installing nbclient (0.7.4)
• Installing packaging (23.2)
• Installing pandocfilters (1.5.0)
• Installing parso (0.8.3)
• Installing pure-eval (0.2.2)
• Installing pygments (2.16.1)
• Installing python-json-logger (2.0.7)
• Installing pyyaml (6.0.1)
• Installing sniffio (1.3.0)
• Installing terminado (0.17.1)
• Installing tinycss2 (1.2.1)
• Installing wcwidth (0.2.8)
• Installing anyio (3.7.1): Installing...
• Installing appnope (0.1.3): Installing...
• Installing anyio (3.7.1)
• Installing appnope (0.1.3)
• Installing argon2-cffi (23.1.0)
• Installing backcall (0.2.0)
• Installing certifi (2023.7.22)
• Installing charset-normalizer (3.3.0)
• Installing decorator (5.1.1)
• Installing jedi (0.19.1)
• Installing jupyter-events (0.7.0)
• Installing jupyter-server-terminals (0.4.4)
• Installing matplotlib-inline (0.1.6)
• Installing nbconvert (7.8.0)
• Installing overrides (7.4.0)
• Installing pexpect (4.8.0)
• Installing pickleshare (0.7.5)
• Installing prometheus-client (0.17.1)
• Installing prompt-toolkit (3.0.39)
• Installing send2trash (1.8.2)
• Installing stack-data (0.6.3)
• Installing urllib3 (2.0.6)
• Installing websocket-client (1.6.3)
• Installing babel (2.13.0)
• Installing comm (0.1.4)
• Installing debugpy (1.8.0)
• Installing ipython (8.12.3)
• Installing json5 (0.9.14)
• Installing jupyter-server (2.7.3)
• Installing psutil (5.9.5)
• Installing requests (2.31.0)
• Installing async-lru (2.0.4)
• Installing ipykernel (6.25.2)
• Installing jupyter-lsp (2.2.0)
• Installing jupyterlab-server (2.25.0)
• Installing notebook-shim (0.2.3)
• Installing fastcore (1.4.2)
• Installing ipython-genutils (0.2.0)
• Installing jupyterlab (4.0.6)
• Installing jupyterlab-widgets (3.0.9)
• Installing mdurl (0.1.2)
• Installing qtpy (2.4.0)
• Installing typing-extensions (4.8.0)
• Installing widgetsnbextension (4.0.9)
• Installing alabaster (0.7.13): Installing...
• Installing annotated-types (0.5.0): Pending...
• Installing alabaster (0.7.13)
• Installing annotated-types (0.5.0)
• Installing docutils (0.17.1)
• Installing frozenlist (1.4.0)
• Installing ghapi (0.1.22)
• Installing imagesize (1.4.1)
• Installing ipywidgets (8.1.1)
• Installing jupyter-console (6.6.3)
• Installing markdown-it-py (2.2.0)
• Installing multidict (6.0.4)
• Installing mypy-extensions (1.0.0)
• Installing notebook (7.0.4)
• Installing pydantic-core (2.10.1)
• Installing qtconsole (5.4.4)
• Installing sphinxcontrib-applehelp (1.0.4)
• Installing snowballstemmer (2.2.0)
• Installing sphinxcontrib-devhelp (1.0.2)
• Installing sphinxcontrib-htmlhelp (2.0.1)
• Installing sphinxcontrib-jsmath (1.0.1)
• Installing sphinxcontrib-qthelp (1.0.3)
• Installing sphinxcontrib-serializinghtml (1.1.5)
• Installing zipp (3.17.0)
• Installing aiosignal (1.3.1): Installing...
• Installing async-timeout (4.0.3): Pending...
• Installing aiosignal (1.3.1)
• Installing async-timeout (4.0.3)
• Installing click (8.1.7)
• Installing fastrelease (0.1.17)
• Installing importlib-metadata (6.8.0)
• Installing jupyter (1.0.0)
• Installing marshmallow (3.20.1)
• Installing mdit-py-plugins (0.3.5)
• Installing pathspec (0.11.2)
• Installing pydantic (2.4.2)
• Installing sphinx (4.5.0)
• Installing sqlalchemy (2.0.21)
• Installing tabulate (0.9.0)
• Installing tokenize-rt (5.2.0)
• Installing typing-inspect (0.9.0)
• Installing yarl (1.9.2)
• Installing aiohttp (3.8.5): Failed
ChefBuildError
Backend subprocess exited when trying to invoke build_wheel
*********************
* Accelerated build *
*********************
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-13-arm64-cpython-312
creating build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_ws.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/worker.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/multipart.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_response.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/client_ws.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/test_utils.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/tracing.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_exceptions.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_middlewares.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/http_exceptions.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_app.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/streams.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_protocol.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/log.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/client.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_urldispatcher.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_request.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/http_websocket.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/client_proto.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/locks.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/__init__.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_runner.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_server.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/base_protocol.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/payload.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/client_reqrep.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/http.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_log.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/resolver.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/formdata.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/payload_streamer.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_routedef.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/connector.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/client_exceptions.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/typedefs.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/hdrs.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_fileresponse.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/http_writer.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/tcp_helpers.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/helpers.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/http_parser.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/cookiejar.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/pytest_plugin.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/abc.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
running egg_info
writing aiohttp.egg-info/PKG-INFO
writing dependency_links to aiohttp.egg-info/dependency_links.txt
writing requirements to aiohttp.egg-info/requires.txt
writing top-level names to aiohttp.egg-info/top_level.txt
reading manifest file 'aiohttp.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'aiohttp' anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.lib' found anywhere in distribution
warning: no previously-included files matching '*.dll' found anywhere in distribution
warning: no previously-included files matching '*.a' found anywhere in distribution
warning: no previously-included files matching '*.obj' found anywhere in distribution
warning: no previously-included files found matching 'aiohttp/*.html'
no previously-included directories found matching 'docs/_build'
adding license file 'LICENSE.txt'
writing manifest file 'aiohttp.egg-info/SOURCES.txt'
copying aiohttp/_cparser.pxd -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_find_header.pxd -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_headers.pxi -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_helpers.pyi -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_helpers.pyx -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_http_parser.pyx -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_http_writer.pyx -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_websocket.pyx -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/py.typed -> build/lib.macosx-13-arm64-cpython-312/aiohttp
creating build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_cparser.pxd.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_find_header.pxd.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_helpers.pyi.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_helpers.pyx.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_http_parser.pyx.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_http_writer.pyx.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_websocket.pyx.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/hdrs.py.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
running build_ext
building 'aiohttp._websocket' extension
creating build/temp.macosx-13-arm64-cpython-312
creating build/temp.macosx-13-arm64-cpython-312/aiohttp
clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX13.sdk -I/private/var/folders/z4/nphh3sds4zsckwzc8kcht7h00000gn/T/tmpmjkdtpaa/.venv/include -I/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12 -c aiohttp/_websocket.c -o build/temp.macosx-13-arm64-cpython-312/aiohttp/_websocket.o
aiohttp/_websocket.c:1475:17: warning: 'Py_OptimizeFlag' is deprecated [-Wdeprecated-declarations]
if (unlikely(!Py_OptimizeFlag)) {
^
/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/pydebug.h:13:1: note: 'Py_OptimizeFlag' has been explicitly marked deprecated here
Py_DEPRECATED(3.12) PyAPI_DATA(int) Py_OptimizeFlag;
^
/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
aiohttp/_websocket.c:2680:27: warning: 'ma_version_tag' is deprecated [-Wdeprecated-declarations]
return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0;
^
aiohttp/_websocket.c:1118:65: note: expanded from macro '__PYX_GET_DICT_VERSION'
#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
^
/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/dictobject.h:22:5: note: 'ma_version_tag' has been explicitly marked deprecated here
Py_DEPRECATED(3.12) uint64_t ma_version_tag;
^
/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
aiohttp/_websocket.c:2692:36: warning: 'ma_version_tag' is deprecated [-Wdeprecated-declarations]
return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0;
^
aiohttp/_websocket.c:1118:65: note: expanded from macro '__PYX_GET_DICT_VERSION'
#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
^
/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/dictobject.h:22:5: note: 'ma_version_tag' has been explicitly marked deprecated here
Py_DEPRECATED(3.12) uint64_t ma_version_tag;
^
/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
aiohttp/_websocket.c:2696:56: warning: 'ma_version_tag' is deprecated [-Wdeprecated-declarations]
if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict)))
^
aiohttp/_websocket.c:1118:65: note: expanded from macro '__PYX_GET_DICT_VERSION'
#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
^
/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/dictobject.h:22:5: note: 'ma_version_tag' has been explicitly marked deprecated here
Py_DEPRECATED(3.12) uint64_t ma_version_tag;
^
/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
aiohttp/_websocket.c:2741:9: warning: 'ma_version_tag' is deprecated [-Wdeprecated-declarations]
__PYX_PY_DICT_LOOKUP_IF_MODIFIED(
^
aiohttp/_websocket.c:1125:16: note: expanded from macro '__PYX_PY_DICT_LOOKUP_IF_MODIFIED'
if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\
^
aiohttp/_websocket.c:1118:65: note: expanded from macro '__PYX_GET_DICT_VERSION'
#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
^
/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/dictobject.h:22:5: note: 'ma_version_tag' has been explicitly marked deprecated here
Py_DEPRECATED(3.12) uint64_t ma_version_tag;
^
/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
aiohttp/_websocket.c:2741:9: warning: 'ma_version_tag' is deprecated [-Wdeprecated-declarations]
__PYX_PY_DICT_LOOKUP_IF_MODIFIED(
^
aiohttp/_websocket.c:1129:30: note: expanded from macro '__PYX_PY_DICT_LOOKUP_IF_MODIFIED'
__pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\
^
aiohttp/_websocket.c:1118:65: note: expanded from macro '__PYX_GET_DICT_VERSION'
#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
^
/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/dictobject.h:22:5: note: 'ma_version_tag' has been explicitly marked deprecated here
Py_DEPRECATED(3.12) uint64_t ma_version_tag;
^
/opt/homebrew/opt/[email protected]/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
aiohttp/_websocket.c:3042:55: error: no member named 'ob_digit' in 'struct _longobject'
const digit* digits = ((PyLongObject*)x)->ob_digit;
~~~~~~~~~~~~~~~~~~ ^
aiohttp/_websocket.c:3097:55: error: no member named 'ob_digit' in 'struct _longobject'
const digit* digits = ((PyLongObject*)x)->ob_digit;
~~~~~~~~~~~~~~~~~~ ^
aiohttp/_websocket.c:3238:55: error: no member named 'ob_digit' in 'struct _longobject'
const digit* digits = ((PyLongObject*)x)->ob_digit;
~~~~~~~~~~~~~~~~~~ ^
aiohttp/_websocket.c:3293:55: error: no member named 'ob_digit' in 'struct _longobject'
const digit* digits = ((PyLongObject*)x)->ob_digit;
~~~~~~~~~~~~~~~~~~ ^
aiohttp/_websocket.c:3744:47: error: no member named 'ob_digit' in 'struct _longobject'
const digit* digits = ((PyLongObject*)b)->ob_digit;
~~~~~~~~~~~~~~~~~~ ^
6 warnings and 5 errors generated.
error: command '/usr/bin/clang' failed with exit code 1
at ~/.local/pipx/venvs/poetry/lib/python3.12/site-packages/poetry/installation/chef.py:164 in _prepare
160│
161│ error = ChefBuildError("\n\n".join(message_parts))
162│
163│ if error is not None:
→ 164│ raise error from None
165│
166│ return path
167│
168│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
Note: This error originates from the build backend, and is likely not a problem with poetry but with aiohttp (3.8.5) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "aiohttp (==3.8.5)"'.
• Installing black (23.10.1)
• Installing colorama (0.4.6)
• Installing dataclasses-json (0.6.1)
• Installing dnspython (2.4.2)
• Installing jsonpatch (1.33)
• Installing jupyter-cache (0.6.1)
• Installing langsmith (0.0.63)
• Installing livereload (2.6.3)
• Installing myst-parser (0.18.1)
• Installing nbdev (1.2.0)
• Installing numpy (1.24.4): Failed
ChefBuildError
Backend 'setuptools.build_meta:__legacy__' is not available.
Traceback (most recent call last):
File "/Users/username/.local/pipx/venvs/poetry/lib/python3.12/site-packages/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
obj = import_module(mod_path)
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/[email protected]/3.12.0/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1381, in _gcd_import
File "<frozen importlib._bootstrap>", line 1354, in _find_and_load
File "<frozen importlib._bootstrap>", line 1304, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1381, in _gcd_import
File "<frozen importlib._bootstrap>", line 1354, in _find_and_load
File "<frozen importlib._bootstrap>", line 1325, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 929, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 994, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/private/var/folders/z4/nphh3sds4zsckwzc8kcht7h00000gn/T/tmphs3uy5rx/.venv/lib/python3.12/site-packages/setuptools/__init__.py", line 10, in <module>
import distutils.core
ModuleNotFoundError: No module named 'distutils'
at ~/.local/pipx/venvs/poetry/lib/python3.12/site-packages/poetry/installation/chef.py:164 in _prepare
160│
161│ error = ChefBuildError("\n\n".join(message_parts))
162│
163│ if error is not None:
→ 164│ raise error from None
165│
166│ return path
167│
168│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
Note: This error originates from the build backend, and is likely not a problem with poetry but with numpy (1.24.4) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "numpy (==1.24.4)"'.
• Installing numpydoc (1.2)
• Installing pydata-sphinx-theme (0.8.1)
• Installing sphinxcontrib-jquery (4.1)
• Installing tenacity (8.2.3)
Warning: The file chosen for install of executing 2.0.0 (executing-2.0.0-py2.py3-none-any.whl) is yanked. Reason for being yanked: Released 2.0.1 which is equivalent but added 'python_requires = >=3.5' so that pip install with Python 2 uses the previous version 1.2.0.
```
Here is my poetry info:
```
╰─➤ poetry env info 1 ↵
Virtualenv
Python: 3.12.0
Implementation: CPython
Path: /Users/username/path/to/langchain2/.venv
Executable: /Users/username/path/to/langchain2/.venv/bin/python
Valid: True
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I believe you should be able to do:
1. git clone
2. poetry install --with test
### Expected behavior
I would expect that all packages specified by the lockfile could be installed successfully. | poetry install --with test issue | https://api.github.com/repos/langchain-ai/langchain/issues/13912/comments | 2 | 2023-11-27T14:07:30Z | 2023-11-27T23:07:40Z | https://github.com/langchain-ai/langchain/issues/13912 | 2,012,359,254 | 13,912 |
[
"hwchase17",
"langchain"
] | ### Feature request
MultiVectorRetriever is really helpful for adding summaries and hypothetical queries of our documents to improve retrieval, but only those two representations are stored in the vectorstore; the entire document lives in a BaseStore (in-memory or local file).
The main issue is that:
- the Memory one is not going to persist across restarts
- the File one is going to create tons of files
Why not keep the original document in the vectorstore as well, instead of using an external file/memory store? (A minimal sketch of the current pattern is shown below.)
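Concretely, the current pattern looks roughly like this: summaries/questions are embedded into the vectorstore while the full documents sit in a separate docstore (Chroma is used purely for illustration; any vectorstore works):

```python
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings

# summaries / hypothetical questions are embedded into the vectorstore
vectorstore = Chroma(collection_name="summaries", embedding_function=OpenAIEmbeddings())

# full parent documents live in a separate key-value store
docstore = InMemoryStore()  # not persistent across restarts

retriever = MultiVectorRetriever(
    vectorstore=vectorstore,
    docstore=docstore,
    id_key="doc_id",
)
```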
### Motivation
Keep documents, questions, and summaries in the same vectorstore.
### Your contribution
I could work on that but I would like to know your point of view. | MultiVector Retriever BaseStore | https://api.github.com/repos/langchain-ai/langchain/issues/13909/comments | 12 | 2023-11-27T11:33:30Z | 2024-07-24T05:23:56Z | https://github.com/langchain-ai/langchain/issues/13909 | 2,012,075,695 | 13,909 |
[
"hwchase17",
"langchain"
] | ### System Info
Ubuntu 23.10
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain import OpenAI, SQLDatabase
from snowflake.snowpark import Session
from langchain.chains import create_sql_query_chain
from dotenv import load_dotenv
import os
from urllib.parse import quote
load_dotenv()
# use the env vars in comments above to set the vars below
OpenAI_API_KEY = os.getenv("OPENAI_API_KEY")
snowflake_account = os.getenv("ACCOUNT")
username = os.getenv("USER")
password = os.getenv("SNOWSQL_PWD")
warehouse = os.getenv("WAREHOUSE")
database = 'LANGCHAIN_DEMO_DB' #os.getenv("DATABASE")
schema = 'PUBLIC' #os.getenv("SCHEMA")
role = os.getenv("ROLE")
# print out all env vars using f-strings each on a separate line but x out password
print(f"OpenAI_API_KEY: {'x' * len(OpenAI_API_KEY)}")
print(f"snowflake_account: {snowflake_account}")
#print(f"username: {username}")
#print(f"password: {password}")
print(f"warehouse: {warehouse}")
print(f"database: {database}")
print(f"schema: {schema}")
print(f"role: {role}")
encoded_password = quote(password, safe='')
```
but it works in my Jupyter notebook

https://medium.com/@muriithicliffernest/snowflake-langchain-generating-sql-queries-from-natural-language-queries-12c4e2918631 is the tutorial I followed for the .ipynb.
```
pip install --upgrade pip
pip install "snowflake-snowpark-python[pandas]" snowflake-sqlalchemy
pip install langchain openai langchain-experimental jupyter
```
are the instructions to install packages in that Medium article so I matched versions for both the conda env I'm using for the .py and .ipynb. Even if I use the same `langchain-snowlfake` env for both the error is still there. See the red line under `from langchain import OpenAI, SQLDatabase
` in the right half of the image which is showing `lanchain-sql.py`
### Expected behavior
The import should work, no red line. | imports of OpenAI and SQLDatabase don't work in .py file | https://api.github.com/repos/langchain-ai/langchain/issues/13906/comments | 3 | 2023-11-27T11:04:57Z | 2024-02-09T02:11:48Z | https://github.com/langchain-ai/langchain/issues/13906 | 2,012,028,998 | 13,906 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Sometimes when interacting with the bot using the RetrievalQA chain, it just stops at "Entering new RetrievalQA chain...".
It never returns a response; it simply stops.
I am calling `qa.acall` and using the async callback handler.
How can I fix this? This behaviour is unacceptable for us.
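For context, this is roughly the call pattern (a hedged sketch; the chain, question, and handler names are placeholders rather than my exact code):

```python
import asyncio
from langchain.callbacks import AsyncIteratorCallbackHandler

async def ask(qa, question: str) -> dict:
    handler = AsyncIteratorCallbackHandler()
    # fail loudly with a timeout instead of hanging forever
    return await asyncio.wait_for(
        qa.acall({"query": question}, callbacks=[handler]),
        timeout=120,
    )
```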
### Suggestion:
_No response_ | Issue: Retrieval QA Chain not giving response after Entering new RetrievalQA chain... | https://api.github.com/repos/langchain-ai/langchain/issues/13900/comments | 1 | 2023-11-27T07:12:34Z | 2024-03-13T20:02:37Z | https://github.com/langchain-ai/langchain/issues/13900 | 2,011,642,730 | 13,900 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
llm = ChatOpenAI(model=gpt_4, temperature=0, api_key=os.environ['OPENAI_API_KEY'])
llm_chain = LLMChain(llm=llm, prompt=react_prompt)
tool_names = [tool.name]
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=react_output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names,
max_execution_time=240,
max_iterations=120,
handle_parsing_errors=True
)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=[tool], verbose=True)
response = agent_executor.run(textual_description)
```
This is my setup for the AgentExecutor. It is prompted to solve OpenAI Gym's Taxi problem and to stop only after the passenger is dropped off at the destination. But, as the title suggests, the AgentExecutor chain finishes before reaching the stopping limits or achieving the stopping condition.
Also, when I use a GPT-3 model it occasionally stops following the ReAct template and raises errors because my output parser cannot process the output correctly. I wonder if there is a way to change that.
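If I read the API correctly, `max_iterations`, `max_execution_time`, and `handle_parsing_errors` are `AgentExecutor` settings rather than `LLMSingleActionAgent` settings, so they may simply be ignored where I pass them above. A hedged sketch of moving them onto the executor:

```python
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=[tool],
    verbose=True,
    max_iterations=120,
    max_execution_time=240,
    handle_parsing_errors=True,  # retry instead of raising on malformed ReAct output
)
```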
### Suggestion:
_No response_ | Issue: AgentExecutor stopping before reaching the set max_iteration and max_execution_time limits without meeting the stop condition | https://api.github.com/repos/langchain-ai/langchain/issues/13897/comments | 4 | 2023-11-27T04:43:45Z | 2023-11-29T13:52:34Z | https://github.com/langchain-ai/langchain/issues/13897 | 2,011,474,672 | 13,897 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain==0.0.316
python==3.10.13
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
i'm using the code here:https://python.langchain.com/docs/integrations/llms/chatglm
Here is the full error output:
Traceback (most recent call last):
File "C:\Users\vic\Desktop\chatGLM\.conda\lib\site-packages\requests\models.py", line 971, in json
return complexjson.loads(self.text, **kwargs)
File "C:\Users\vic\Desktop\chatGLM\.conda\lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "C:\Users\vic\Desktop\chatGLM\.conda\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\vic\Desktop\chatGLM\.conda\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\chatglm.py", line 107, in _call
parsed_response = response.json()
File "C:\Users\vic\Desktop\chatGLM\.conda\lib\site-packages\requests\models.py", line 975, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\vic\Desktop\chatGLM\test_server.py", line 36, in <module>
print(llm_chain.run(question))
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\chains\base.py", line 503, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\chains\base.py", line 308, in __call__
raise e
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\chains\base.py", line 302, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\chains\llm.py", line 93, in _call
response = self.generate([inputs], run_manager=run_manager)
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\chains\llm.py", line 103, in generate
return self.llm.generate_prompt(
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\base.py", line 497, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\base.py", line 646, in generate
output = self._generate_helper(
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\base.py", line 534, in _generate_helper
raise e
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\base.py", line 521, in _generate_helper
self._generate(
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\base.py", line 1043, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\chatglm.py", line 120, in _call
raise ValueError(
ValueError: Error raised during decoding response from inference endpoint: Expecting value: line 1 column 1 (char 0).
### Expected behavior
The output of ChatGLM's response is missing; `print(response)` only gives:
<Response [200]> | Error raised during decoding response from inference endpoint when using ChatGLM | https://api.github.com/repos/langchain-ai/langchain/issues/13896/comments | 2 | 2023-11-27T04:12:28Z | 2024-03-13T20:03:46Z | https://github.com/langchain-ai/langchain/issues/13896 | 2,011,448,434 | 13,896 |
[
"hwchase17",
"langchain"
] | ### Feature request
We are working on a way to add a multi-input tool to LangChain for searching Reddit posts. Integrating the API as a tool will allow agents to search for posts using a specific search query and some query parameters like sort, time_filter, subreddit etc. to respond to prompts. The tool will use search functionality provided by [the `praw` package](https://praw.readthedocs.io/en/stable/code_overview/models/subreddit.html#praw.models.Subreddit.search).
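For illustration, the underlying `praw` call we plan to wrap looks roughly like this (credentials and query values are placeholders):

```python
import praw

reddit = praw.Reddit(
    client_id="...", client_secret="...", user_agent="langchain-reddit-tool-example"
)

# search a subreddit with sort / time_filter / limit parameters
for submission in reddit.subreddit("all").search(
    "langchain", sort="relevance", time_filter="week", limit=5
):
    print(submission.title, submission.score)
```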
### Motivation
Although LangChain currently has a document loader for Reddit (RedditPostsLoader), it is more centred around subreddit and username to load posts and we want to create our tool to provide more functionalities. Our tool will offer functionality for sorting and filtering by time, which is currently not handled by RedditPostsLoader. With this tool, agents can respond to prompts by interacting with the API without the user having to manually load the Reddit posts. The multi-input nature of the tool will make it useful for responding to more diverse prompts and we hope that users can use it to better leverage [multi-input tool](https://python.langchain.com/docs/modules/agents/tools/multi_input_tool) and [shared memory](https://python.langchain.com/docs/modules/agents/how_to/sharedmemory_for_tools) functionalities already provided by LangChain.
### Your contribution
We have our code already prepared and we will be submitting a PR soon. As encouraged by contributing.md, we have added integration tests, a notebook example, and edits for documentation generation. `praw` has also been added as an optional dependency. | Adding a multi-input Reddit search tool | https://api.github.com/repos/langchain-ai/langchain/issues/13891/comments | 2 | 2023-11-27T02:16:19Z | 2023-12-11T03:21:33Z | https://github.com/langchain-ai/langchain/issues/13891 | 2,011,359,518 | 13,891 |
[
"hwchase17",
"langchain"
] | ### System Info
langchain version 0.0.340
Python version: 3.11.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create an Obsidian template with a [template variable](https://help.obsidian.md/Plugins/Templates#Template+variables) in the [properties](https://help.obsidian.md/Editing+and+formatting/Properties#Property+format) section of the file.
2. Attempt to load a directory containing that template file using [ObsidianLoader](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/obsidian.py).
```shell
$ echo -e "---\nyear: {{date:YYYY}}\n---" > vault/template.md
$ python
>>> from langchain.document_loaders.obsidian import ObsidianLoader
>>> loader = ObsidianLoader('vault')
>>> loader.load()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/langchain/document_loaders/obsidian.py", line 115, in load
front_matter = self._parse_front_matter(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/langchain/document_loaders/obsidian.py", line 48, in _parse_front_matter
front_matter = yaml.safe_load(match.group(1))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/__init__.py", line 125, in safe_load
return load(stream, SafeLoader)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/__init__.py", line 81, in load
return loader.get_single_data()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/constructor.py", line 51, in get_single_data
return self.construct_document(node)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/constructor.py", line 60, in construct_document
for dummy in generator:
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/constructor.py", line 413, in construct_yaml_map
value = self.construct_mapping(node)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/constructor.py", line 218, in construct_mapping
return super().construct_mapping(node, deep=deep)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/constructor.py", line 141, in construct_mapping
raise ConstructorError("while constructing a mapping", node.start_mark,
yaml.constructor.ConstructorError: while constructing a mapping
in "<unicode string>", line 1, column 7:
year: {{date:YYYY}}
^
found unhashable key
in "<unicode string>", line 1, column 8:
year: {{date:YYYY}}
```
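A hedged sketch of a possible workaround: tolerate front matter that is not valid YAML instead of failing the whole load (the method name is taken from the traceback above; treat this as a sketch, not the library API):

```python
import yaml
from langchain.document_loaders.obsidian import ObsidianLoader

class TolerantObsidianLoader(ObsidianLoader):
    def _parse_front_matter(self, content):
        try:
            return super()._parse_front_matter(content)
        except yaml.YAMLError:
            # template variables like {{date:YYYY}} are not valid YAML; skip them
            return {}

loader = TolerantObsidianLoader("vault")
docs = loader.load()
```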
### Expected behavior
[Template variables](https://help.obsidian.md/Plugins/Templates#Template+variables) are a feature in Obsidian and including them in the properties section of a file is perfectly valid, so [ObsidianLoader](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/obsidian.py) should have no issue loading a directory that includes a file that has template variable in its properties. | ObsidianLoader fails when encountering template variables in the properties frontmatter of a file | https://api.github.com/repos/langchain-ai/langchain/issues/13887/comments | 1 | 2023-11-27T01:05:47Z | 2024-03-13T20:01:25Z | https://github.com/langchain-ai/langchain/issues/13887 | 2,011,308,854 | 13,887 |
[
"hwchase17",
"langchain"
] | ### Feature request
If I want to use VectorStoreRetrieverMemory to store my users' chat memories, I need to search and store them using user_id and session_id. However, memory.save_context doesn't have a 'metadata' option.
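For reference, a minimal sketch of the current API (the vectorstore is assumed to exist elsewhere; the point is that `save_context` only takes input/output dicts, with no per-user metadata):

```python
from langchain.memory import VectorStoreRetrieverMemory

retriever = vectorstore.as_retriever(search_kwargs={"k": 4})
memory = VectorStoreRetrieverMemory(retriever=retriever)

# no way to attach user_id / session_id metadata here today
memory.save_context({"input": "Hi, I'm Alice"}, {"output": "Hello Alice!"})
```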
### Motivation
I want to associate chat memory with a single user.
### Your contribution
I can't submit PR | storing metadata with the VectorStoreRetrieverMemory memory module | https://api.github.com/repos/langchain-ai/langchain/issues/13876/comments | 2 | 2023-11-26T15:14:26Z | 2024-03-13T19:55:51Z | https://github.com/langchain-ai/langchain/issues/13876 | 2,011,079,659 | 13,876 |
[
"hwchase17",
"langchain"
] | ### System Info
Python version: 3.11.5
Langchain version: 0.0.316
### Who can help?
@3coins
@hw
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am using Amazon Kendra as vector store to retrieve relevant documents as part of a Q&A application. As `UserContext` I am using User token:
```
def get_kendra_json_token(user_name: str, groups: List[str]):
kendra_json_token = {
'username': user_name,
'groups': groups
}
return kendra_json_token
```
This output is subsequently converted to: `'user_context': {'Token': json.dumps(kendra_json_token)}`
Everything is fine when I build the Retriever:
```
def get_kendra_doc_retriever(inputs: KendraRequest) -> AmazonKendraRetriever:
try:
kendra_client = boto3.client("kendra", os.environ.get('AWS_REGION'))
retriever = AmazonKendraRetriever(
index_id=inputs.kendra_index_id,
top_k=get_param(AIAssistantParam.NB_KENDRA_DOCS),
client=kendra_client,
attribute_filter=inputs.attribute_filter,
user_context=inputs.user_context
)
logger.info(f'Kendra retriever successfully instantiated')
return retriever
```
But then, when I call `get_relevant_documents`:
```
def ask_question(
chain: Chain,
retriever: AmazonKendraRetriever,
question: str
) -> Response:
try:
context = retriever.get_relevant_documents(question)
```
I get this exception: `An error occurred (AccessDeniedException) when calling the Retrieve operation: The provided JSON token isn't valid. The username couldn't be parsed. Generate a new token with username as an array of strings and try your request again.` Of course, `username` should be a string.
If I change the code doing this (swapping the content of `user_name` and `groups` in the user token):
```
def get_kendra_json_token(user_name: List[str], groups: str):
kendra_json_token = {
'username': groups,
'groups': user_name
}
return kendra_json_token
```
everything works fine. It is as if the `user_name` and `groups` input parameters get swapped somewhere.
### Expected behavior
No exception should be raised when creating the user token as explained in the description above. | KENDRA: issue with user_context parameter when using get_relevant_documents method (langchain.retrievers.kendra.AmazonKendraRetriever) | https://api.github.com/repos/langchain-ai/langchain/issues/13870/comments | 1 | 2023-11-26T09:55:33Z | 2023-12-11T15:22:58Z | https://github.com/langchain-ai/langchain/issues/13870 | 2,010,975,800 | 13,870 |
[
"hwchase17",
"langchain"
] | ### Feature request
Currently the `_search` function in `ElasticsearchStore` assumes that the `hit` object returned in the search has a `metadata` field under `_source`:
```python
hit["_source"]["metadata"][field] = hit["_source"][field]
```
However, this is not the case in the index I work with - it does not have a `metadata` field. Due to that, an exception is raised.
Note that the following code does not help -
```python
if "metadata" not in fields:
fields.append("metadata")
```
The index still does not return any `metadata`.
I assume that in indexes created by `ElasticsearchStore` the `metadata` field is forced, and therefore there is no such issue. However, when using indexes created by external tools, it is better not to assume that the field exists, and to support the case where it doesn't (a hedged sketch of a more defensive access is below).
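A hedged sketch of what a more defensive version of the quoted line could look like (variable names follow the snippet above; this is a proposal, not the current implementation):

```python
source = hit["_source"]
metadata = source.setdefault("metadata", {})  # tolerate indexes without a metadata field
for field in fields:
    if field != "metadata" and field in source:
        metadata[field] = source[field]
```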
### Motivation
I'd prefer to re-use the existing `ElasticsearchStore` instead of my own implementation of it.
### Your contribution
I think I can contribute a PR handling this issue, if the admins confirm the feature request. | Support for elastic index without metadata field | https://api.github.com/repos/langchain-ai/langchain/issues/13869/comments | 1 | 2023-11-26T09:33:34Z | 2024-03-13T19:56:05Z | https://github.com/langchain-ai/langchain/issues/13869 | 2,010,969,350 | 13,869 |
[
"hwchase17",
"langchain"
] | ### System Info
#### Environment variable
```bash
BENTOML_DEBUG=''
BENTOML_QUIET=''
BENTOML_BUNDLE_LOCAL_BUILD=''
BENTOML_DO_NOT_TRACK=''
BENTOML_CONFIG=''
BENTOML_CONFIG_OPTIONS=''
BENTOML_PORT=''
BENTOML_HOST=''
BENTOML_API_WORKERS=''
```
#### System information
`bentoml`: 1.1.10
`python`: 3.11.5
`platform`: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
`uid_gid`: 1000:1000
`conda`: 23.7.4
`in_conda_env`: True
<details><summary><code>conda_packages</code></summary>
<br>
```yaml
name: openllm
channels:
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- bzip2=1.0.8=h7b6447c_0
- ca-certificates=2023.08.22=h06a4308_0
- ld_impl_linux-64=2.38=h1181459_1
- libffi=3.4.4=h6a678d5_0
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- libuuid=1.41.5=h5eee18b_0
- ncurses=6.4=h6a678d5_0
- openssl=3.0.12=h7f8727e_0
- pip=23.3.1=py311h06a4308_0
- python=3.11.5=h955ad1f_0
- readline=8.2=h5eee18b_0
- setuptools=68.0.0=py311h06a4308_0
- sqlite=3.41.2=h5eee18b_0
- tk=8.6.12=h1ccaba5_0
- wheel=0.41.2=py311h06a4308_0
- xz=5.4.2=h5eee18b_0
- zlib=1.2.13=h5eee18b_0
- pip:
- accelerate==0.24.1
- aiohttp==3.9.0
- aiosignal==1.3.1
- anyio==3.7.1
- appdirs==1.4.4
- asgiref==3.7.2
- attrs==23.1.0
- beautifulsoup4==4.12.2
- bentoml==1.1.10
- bitsandbytes==0.41.2.post2
- build==0.10.0
- cattrs==23.1.2
- certifi==2023.11.17
- charset-normalizer==3.3.2
- circus==0.18.0
- click==8.1.7
- click-option-group==0.5.6
- cloudpickle==3.0.0
- coloredlogs==15.0.1
- contextlib2==21.6.0
- cuda-python==12.3.0
- dataclasses-json==0.6.2
- datasets==2.15.0
- deepmerge==1.1.0
- deprecated==1.2.14
- dill==0.3.7
- distlib==0.3.7
- distro==1.8.0
- einops==0.7.0
- fastapi==0.104.1
- fastcore==1.5.29
- filelock==3.13.1
- filetype==1.2.0
- frozenlist==1.4.0
- fs==2.4.16
- fsspec==2023.10.0
- ghapi==1.0.4
- greenlet==3.0.1
- h11==0.14.0
- httpcore==1.0.2
- httptools==0.6.1
- httpx==0.25.2
- huggingface-hub==0.19.4
- humanfriendly==10.0
- idna==3.6
- importlib-metadata==6.8.0
- inflection==0.5.1
- jinja2==3.1.2
- jsonpatch==1.33
- jsonpointer==2.4
- jsonschema==4.20.0
- jsonschema-specifications==2023.11.1
- langchain==0.0.340
- langsmith==0.0.66
- markdown-it-py==3.0.0
- markupsafe==2.1.3
- marshmallow==3.20.1
- mdurl==0.1.2
- mpmath==1.3.0
- msgpack==1.0.7
- multidict==6.0.4
- multiprocess==0.70.15
- mypy-extensions==1.0.0
- networkx==3.2.1
- ninja==1.11.1.1
- numpy==1.26.2
- nvidia-cublas-cu12==12.1.3.1
- nvidia-cuda-cupti-cu12==12.1.105
- nvidia-cuda-nvrtc-cu12==12.1.105
- nvidia-cuda-runtime-cu12==12.1.105
- nvidia-cudnn-cu12==8.9.2.26
- nvidia-cufft-cu12==11.0.2.54
- nvidia-curand-cu12==10.3.2.106
- nvidia-cusolver-cu12==11.4.5.107
- nvidia-cusparse-cu12==12.1.0.106
- nvidia-ml-py==11.525.150
- nvidia-nccl-cu12==2.18.1
- nvidia-nvjitlink-cu12==12.3.101
- nvidia-nvtx-cu12==12.1.105
- openllm==0.4.28
- openllm-client==0.4.28
- openllm-core==0.4.28
- opentelemetry-api==1.20.0
- opentelemetry-instrumentation==0.41b0
- opentelemetry-instrumentation-aiohttp-client==0.41b0
- opentelemetry-instrumentation-asgi==0.41b0
- opentelemetry-sdk==1.20.0
- opentelemetry-semantic-conventions==0.41b0
- opentelemetry-util-http==0.41b0
- optimum==1.14.1
- orjson==3.9.10
- packaging==23.2
- pandas==2.1.3
- pathspec==0.11.2
- pillow==10.1.0
- pip-requirements-parser==32.0.1
- pip-tools==7.3.0
- platformdirs==4.0.0
- prometheus-client==0.19.0
- protobuf==4.25.1
- psutil==5.9.6
- pyarrow==14.0.1
- pyarrow-hotfix==0.6
- pydantic==1.10.13
- pygments==2.17.2
- pyparsing==3.1.1
- pyproject-hooks==1.0.0
- python-dateutil==2.8.2
- python-dotenv==1.0.0
- python-json-logger==2.0.7
- python-multipart==0.0.6
- pytz==2023.3.post1
- pyyaml==6.0.1
- pyzmq==25.1.1
- ray==2.8.0
- referencing==0.31.0
- regex==2023.10.3
- requests==2.31.0
- rich==13.7.0
- rpds-py==0.13.1
- safetensors==0.4.0
- schema==0.7.5
- scipy==1.11.4
- sentencepiece==0.1.99
- simple-di==0.1.5
- six==1.16.0
- sniffio==1.3.0
- soupsieve==2.5
- sqlalchemy==2.0.23
- starlette==0.27.0
- sympy==1.12
- tenacity==8.2.3
- tokenizers==0.15.0
- torch==2.1.0
- tornado==6.3.3
- tqdm==4.66.1
- transformers==4.35.2
- triton==2.1.0
- typing-extensions==4.8.0
- typing-inspect==0.9.0
- tzdata==2023.3
- urllib3==2.1.0
- uvicorn==0.24.0.post1
- uvloop==0.19.0
- virtualenv==20.24.7
- vllm==0.2.2
- watchfiles==0.21.0
- websockets==12.0
- wrapt==1.16.0
- xformers==0.0.22.post7
- xxhash==3.4.1
- yarl==1.9.3
- zipp==3.17.0
prefix: /home/lolevsky/anaconda3/envs/openllm
```
</details>
<details><summary><code>pip_packages</code></summary>
<br>
```
accelerate==0.24.1
aiohttp==3.9.0
aiosignal==1.3.1
anyio==3.7.1
appdirs==1.4.4
asgiref==3.7.2
attrs==23.1.0
beautifulsoup4==4.12.2
bentoml==1.1.10
bitsandbytes==0.41.2.post2
build==0.10.0
cattrs==23.1.2
certifi==2023.11.17
charset-normalizer==3.3.2
circus==0.18.0
click==8.1.7
click-option-group==0.5.6
cloudpickle==3.0.0
coloredlogs==15.0.1
contextlib2==21.6.0
cuda-python==12.3.0
dataclasses-json==0.6.2
datasets==2.15.0
deepmerge==1.1.0
Deprecated==1.2.14
dill==0.3.7
distlib==0.3.7
distro==1.8.0
einops==0.7.0
fastapi==0.104.1
fastcore==1.5.29
filelock==3.13.1
filetype==1.2.0
frozenlist==1.4.0
fs==2.4.16
fsspec==2023.10.0
ghapi==1.0.4
greenlet==3.0.1
h11==0.14.0
httpcore==1.0.2
httptools==0.6.1
httpx==0.25.2
huggingface-hub==0.19.4
humanfriendly==10.0
idna==3.6
importlib-metadata==6.8.0
inflection==0.5.1
Jinja2==3.1.2
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.20.0
jsonschema-specifications==2023.11.1
langchain==0.0.340
langsmith==0.0.66
markdown-it-py==3.0.0
MarkupSafe==2.1.3
marshmallow==3.20.1
mdurl==0.1.2
mpmath==1.3.0
msgpack==1.0.7
multidict==6.0.4
multiprocess==0.70.15
mypy-extensions==1.0.0
networkx==3.2.1
ninja==1.11.1.1
numpy==1.26.2
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-ml-py==11.525.150
nvidia-nccl-cu12==2.18.1
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu12==12.1.105
openllm==0.4.28
openllm-client==0.4.28
openllm-core==0.4.28
opentelemetry-api==1.20.0
opentelemetry-instrumentation==0.41b0
opentelemetry-instrumentation-aiohttp-client==0.41b0
opentelemetry-instrumentation-asgi==0.41b0
opentelemetry-sdk==1.20.0
opentelemetry-semantic-conventions==0.41b0
opentelemetry-util-http==0.41b0
optimum==1.14.1
orjson==3.9.10
packaging==23.2
pandas==2.1.3
pathspec==0.11.2
Pillow==10.1.0
pip-requirements-parser==32.0.1
pip-tools==7.3.0
platformdirs==4.0.0
prometheus-client==0.19.0
protobuf==4.25.1
psutil==5.9.6
pyarrow==14.0.1
pyarrow-hotfix==0.6
pydantic==1.10.13
Pygments==2.17.2
pyparsing==3.1.1
pyproject_hooks==1.0.0
python-dateutil==2.8.2
python-dotenv==1.0.0
python-json-logger==2.0.7
python-multipart==0.0.6
pytz==2023.3.post1
PyYAML==6.0.1
pyzmq==25.1.1
ray==2.8.0
referencing==0.31.0
regex==2023.10.3
requests==2.31.0
rich==13.7.0
rpds-py==0.13.1
safetensors==0.4.0
schema==0.7.5
scipy==1.11.4
sentencepiece==0.1.99
simple-di==0.1.5
six==1.16.0
sniffio==1.3.0
soupsieve==2.5
SQLAlchemy==2.0.23
starlette==0.27.0
sympy==1.12
tenacity==8.2.3
tokenizers==0.15.0
torch==2.1.0
tornado==6.3.3
tqdm==4.66.1
transformers==4.35.2
triton==2.1.0
typing-inspect==0.9.0
typing_extensions==4.8.0
tzdata==2023.3
urllib3==2.1.0
uvicorn==0.24.0.post1
uvloop==0.19.0
virtualenv==20.24.7
vllm==0.2.2
watchfiles==0.21.0
websockets==12.0
wrapt==1.16.0
xformers==0.0.22.post7
xxhash==3.4.1
yarl==1.9.3
zipp==3.17.0
```
</details>
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am following the example and wrote the code:
```
llm = OpenLLM(server_url=server_url, server_type='http')
llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
```
It seems like the first request is hitting the server:
```
(scheme=http,method=POST,path=/v1/metadata,type=application/json,length=2) (status=200
```
Up to this point it looks promising, but then I get the error `TypeError: 'dict' object is not callable`.
As attached in the trace:
```
Traceback (most recent call last):
File "/home/lolevsky/Github/Zodiac/main.py", line 24, in <module>
run_zodiac()
File "/home/lolevsky/Github/Zodiac/main.py", line 9, in run_zodiac
resA = llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 876, in __call__
self.generate(
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 626, in generate
params = self.dict()
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 974, in dict
starter_dict = dict(self._identifying_params)
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/openllm.py", line 220, in _identifying_params
self.llm_kwargs.update(self._client._config())
TypeError: 'dict' object is not callable
```
### To reproduce
This is how I set up the environment:
- conda create --name openllm python=3.11
- conda activate openllm
- pip install openllm
- pip install langchain
### Expected behavior
Should not get errors, should hit the server for prompting | bug: When running by example getting error: TypeError: 'dict' object is not callable | https://api.github.com/repos/langchain-ai/langchain/issues/13867/comments | 4 | 2023-11-26T08:25:23Z | 2024-04-15T16:07:35Z | https://github.com/langchain-ai/langchain/issues/13867 | 2,010,950,164 | 13,867 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Hi, can we get more documentation on `langchain_experimental.rl_chain`? I'm having trouble wrapping my head around how it works, and the documentation is sparse.
From the notebook intro, originally I thought it was going to tune the human written prompt template and then output a new and improved prompt template that it found was better. However it seems to be doing something else.
### Idea or request for content:
_No response_ | DOC: How langchain_experimental.rl_chain works | https://api.github.com/repos/langchain-ai/langchain/issues/13865/comments | 3 | 2023-11-26T06:33:13Z | 2024-03-13T19:55:36Z | https://github.com/langchain-ai/langchain/issues/13865 | 2,010,911,639 | 13,865 |
[
"hwchase17",
"langchain"
] | ### System Info
RTX 3090
```
Here is notebook for reference: https://colab.research.google.com/drive/1Rwdrji34CV4QJofVl9jAT7-EwodvphA4?usp=sharing
```
### Who can help?
@agola11 @ey
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
!wget -O /content/models/ggml-model-f16.gguf https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/ggml-model-f16.gguf
!wget -O /content/models/ggml-model-q5_k.gguf https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/ggml-model-q5_k.gguf
```
```
%%bash
# Define the directory containing the images
IMG_DIR=/content/LLAVA/
# Loop through each image in the directory
for img in "${IMG_DIR}"*.jpg; do
# Extract the base name of the image without extension
base_name=$(basename "$img" .jpg)
# Define the output file name based on the image name
output_file="${IMG_DIR}${base_name}.txt"
# Execute the command and save the output to the defined output file
/content/llama.cpp/bin/llava -m /content/models/ggml-model-q5_k.gguf --mmproj /content/models//mmproj-model-f16.gguf --temp 0.1 -p "Describe the image in detail. Be specific about graphs, such as bar plots." --image "$img" > "$output_file"
done
```
gives error:
```
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
[<ipython-input-51-e049cdfbb7ce>](https://localhost:8080/#) in <cell line: 1>()
----> 1 get_ipython().run_cell_magic('bash', '', '\n# Define the directory containing the images\nIMG_DIR=/content/LLAVA/\n\n# Loop through each image in the directory\nfor img in "${IMG_DIR}"*.jpg; do\n # Extract the base name of the image without extension\n base_name=$(basename "$img" .jpg)\n\n # Define the output file name based on the image name\n output_file="${IMG_DIR}${base_name}.txt"\n\n # Execute the command and save the output to the defined output file\n /content/llama.cpp/bin/llava -m /content/models/ggml-model-q5_k.gguf --mmproj /content/models//mmproj-model-f16.gguf --temp 0.1 -p "Describe the image in detail. Be specific about graphs, such as bar plots." --image "$img" > "$output_file"\n\ndone\n')
4 frames
<decorator-gen-103> in shebang(self, line, cell)
[/usr/local/lib/python3.10/dist-packages/IPython/core/magics/script.py](https://localhost:8080/#) in shebang(self, line, cell)
243 sys.stderr.flush()
244 if args.raise_error and p.returncode!=0:
--> 245 raise CalledProcessError(p.returncode, cell, output=out, stderr=err)
246
247 def _run_script(self, p, cell, to_close):
CalledProcessError: Command 'b'\n# Define the directory containing the images\nIMG_DIR=/content/LLAVA/\n\n# Loop through each image in the directory\nfor img in "${IMG_DIR}"*.jpg; do\n # Extract the base name of the image without extension\n base_name=$(basename "$img" .jpg)\n\n # Define the output file name based on the image name\n output_file="${IMG_DIR}${base_name}.txt"\n\n # Execute the command and save the output to the defined output file\n /content/llama.cpp/bin/llava -m /content/models/ggml-model-q5_k.gguf --mmproj /content/models//mmproj-model-f16.gguf --temp 0.1 -p "Describe the image in detail. Be specific about graphs, such as bar plots." --image "$img" > "$output_file"\n\ndone\n'' returned non-zero exit status 127.
```
### Expected behavior
It should run, but I do not understand this part:
```
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
``` | CalledProcessError: bash command for LLAVA in Multimodal giving error | https://api.github.com/repos/langchain-ai/langchain/issues/13863/comments | 3 | 2023-11-26T02:43:06Z | 2024-03-13T20:01:26Z | https://github.com/langchain-ai/langchain/issues/13863 | 2,010,867,450 | 13,863 |
[
"hwchase17",
"langchain"
] | ### System Info
Hello,
I am trying to use a Baseten-hosted LLM in a RAG pipeline.
```
from operator import itemgetter
from langchain.llms import Baseten
from langchain.schema.runnable import RunnableMap
llm = Baseten(model="MODEL_ID", verbose=True)
rag_chain_from_docs = (
{
"context": lambda input: format_docs(input["documents"]),
"question": itemgetter("question"),
}
| rag_prompt_custom
| llm
| StrOutputParser()
)
rag_chain_with_source = RunnableMap(
{"documents": retriever, "question": RunnablePassthrough()}
) | {
"documents": lambda input: [doc.metadata for doc in input["documents"]],
"answer": rag_chain_from_docs,
}
rag_chain_with_source.invoke("What is Task Decomposition")
```
I am using a FAISS retriever and I am getting the following error on the `.invoke()` method:
```
File "/Users/usr/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/llms/baseten.py", line 69, in _call
response = model.predict({"prompt": prompt, **kwargs})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/usr/miniconda3/envs/langchain/lib/python3.11/site-packages/baseten/common/core.py", line 67, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/usr/miniconda3/envs/langchain/lib/python3.11/site-packages/baseten/baseten_deployed_model.py", line 124, in predict
raise TypeError('predict can be called with either a list, a pandas DataFrame, or a numpy array.')
TypeError: predict can be called with either a list, a pandas DataFrame, or a numpy array.
```
It seems the `model.predict()` method is expecting a list. Has anyone already encountered this error?
Thank you in advance !
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from operator import itemgetter
from langchain.llms import Baseten
from langchain.schema.runnable import RunnableMap
llm = Baseten(model="MODEL_ID", verbose=True)
rag_chain_from_docs = (
{
"context": lambda input: format_docs(input["documents"]),
"question": itemgetter("question"),
}
| rag_prompt_custom
| llm
| StrOutputParser()
)
rag_chain_with_source = RunnableMap(
{"documents": retriever, "question": RunnablePassthrough()}
) | {
"documents": lambda input: [doc.metadata for doc in input["documents"]],
"answer": rag_chain_from_docs,
}
rag_chain_with_source.invoke("What is Task Decomposition")
```
### Expected behavior
It seems the `model.predict()` method is expecting a list. Could you fix this issue? | TypeError using Baseten in a RAG | https://api.github.com/repos/langchain-ai/langchain/issues/13861/comments | 1 | 2023-11-25T23:39:12Z | 2024-03-13T20:02:45Z | https://github.com/langchain-ai/langchain/issues/13861 | 2,010,829,835 | 13,861 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```python
# Load Tools
tools = load_tools(["serpapi","langchain_experimental_python_repl"], llm=llm)
```
The error:
```
Exception has occurred: ImportError
This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
File "/home/isayahc/projects/buy-bot/react_agent.py", line 49, in create_agent_executor
tools = load_tools(["serpapi","python_repl"], llm=llm)
File "/home/isayahc/projects/buy-bot/react_agent.py", line 88, in <module>
agent_executor = create_agent_executor()
ImportError: This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read h To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
```
How do I fix this?
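A hedged sketch of what I think the migration note asks for: build the REPL tool from `langchain_experimental` and append it, instead of naming it in `load_tools` (the exact import path is an assumption based on the error message):

```python
from langchain.agents import load_tools
from langchain_experimental.tools import PythonREPLTool

tools = load_tools(["serpapi"], llm=llm)
tools.append(PythonREPLTool())  # sandbox this in production, per SECURITY.md
```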
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/13859/comments | 1 | 2023-11-25T23:18:35Z | 2023-11-26T01:05:46Z | https://github.com/langchain-ai/langchain/issues/13859 | 2,010,826,017 | 13,859 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```
Exception has occurred: ImportError
This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
  File "/home/isayahc/projects/buy-bot/react_agent.py", line 43, in create_agent_executor
    tools = load_tools(["serpapi","python_repl"], llm=llm)
  File "/home/isayahc/projects/buy-bot/react_agent.py", line 81, in <module>
    agent_executor = create_agent_executor()
ImportError: This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
```
I am trying to use the Python REPL tool and load it into my tools.
### Suggestion:
_No response_ | Issue: Load tools from experimental langchain module | https://api.github.com/repos/langchain-ai/langchain/issues/13858/comments | 1 | 2023-11-25T22:57:56Z | 2024-03-13T19:57:32Z | https://github.com/langchain-ai/langchain/issues/13858 | 2,010,821,824 | 13,858 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
```
Exception has occurred: ImportError
This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
  File "/home/isayahc/projects/buy-bot/react_agent.py", line 43, in create_agent_executor
    tools = load_tools(["serpapi","python_repl"], llm=llm)
  File "/home/isayahc/projects/buy-bot/react_agent.py", line 81, in <module>
    agent_executor = create_agent_executor()
ImportError: This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
```
### Suggestion:
_No response_ | Issue: what string works for experimental tool | https://api.github.com/repos/langchain-ai/langchain/issues/13856/comments | 3 | 2023-11-25T22:34:55Z | 2023-11-25T22:56:58Z | https://github.com/langchain-ai/langchain/issues/13856 | 2,010,817,039 | 13,856 |
[
"hwchase17",
"langchain"
] | ### System Info
Python 3.9.18
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi, I tried this example, but it doesn't work.
```
from llamaapi import LlamaAPI
from langchain.chains import create_extraction_chain
llama = LlamaAPI("My Api KEy")
from langchain_experimental.llms import ChatLlamaAPI
model = ChatLlamaAPI(client=llama)
schema = {
"properties": {
"name": {"type": "string"},
"height": {"type": "integer"},
"hair_color": {"type": "string"},
},
"required": ["name", "height"],
}
inp = """
Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
"""
chain = create_extraction_chain(schema, model)
chain.run(inp)
```
File "C:\mini\envs\py39\lib\site-packages\langchain\output_parsers\openai_func
tions.py", line 136, in parse_result
return res.get(self.key_name) if partial else res[self.key_name]
TypeError: string indices must be integers
### Expected behavior
Extraction from text | create_extraction_chain does not work with other LLMs?i try with llama_api | https://api.github.com/repos/langchain-ai/langchain/issues/13847/comments | 4 | 2023-11-25T13:01:51Z | 2024-03-17T16:06:32Z | https://github.com/langchain-ai/langchain/issues/13847 | 2,010,605,909 | 13,847 |
[
"hwchase17",
"langchain"
] | ### System Info
I am writing code and I want to add history to my langchain agent. The history is present in the `chats` list.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [x] Memory
- [x] Agents / Agent Executors
- [x] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
async def chat_with_agent(user_input, formatting_data, chats: list):
"""
Initiates a chat with the agent based on the user input.
"""
try:
# Initialize the chat model
llm_model = "gpt-4-1106-preview"
llm = ChatOpenAI(temperature=0.3, model=llm_model)
# Load necessary tools
tool = StructuredTool.from_function(get_human_input)
tools = load_tools(["serpapi"], llm=llm)
tools_list = [tool, exposure, get_user_profile, get_user_risk_profile, get_stock_technical_analysis,
get_stock_fundamental_analysis, get_mutual_fund_exposure, get_stock_based_news,
user_agent_chat_history]
# Initialize the agent
agent = initialize_agent(
tools + tools_list,
llm,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
handle_parsing_errors=True,
verbose=True,
max_execution_time=1800,
max_iterations=300,
agent_kwargs={
'prefix': "Answer the questions as best you can. Use tools when needed. First task is check user "
"If the query requires and domain specific experts and mention it in response there are other"
"experts in system like stock expert, tax expert, mutual fund expert "
"First: you task is to answer only financial question only"
},
return_intermediate_steps=True
)
# Add additional prompt
extra_prompt = ("You are Relationship Manager. All values are in Indian Rupees. Answers or tasks always lie "
"in the capacity of the tools. So ensure you are not expecting anything outside of it."
)
final_input = "This is user input " + user_input + " This is helping prompt " + extra_prompt
try:
logger.info(f"User input + extra prompt: {user_input + extra_prompt}")
# Run the agent
result = agent(final_input)
except Exception as e:
logger.exception(f"Error while running the agent: {e}")
result = str(e)
logger.info(f"Agent chat result: {result['output']}")
response = personalised_response_from_ai(final_input, str(result['output']), str(result["intermediate_steps"]),
formatting_data)
"""report = report_writing_tool(user_input, str(result['output']), str(result["intermediate_steps"]))"""
logger.info(f"Response from GPT: {response}")
# return f" {response}, Report: {report}"
if response:
return response
else:
return str(result['output'])
except Exception as e:
logger.error(f"Error while talking with RM Agent: {str(e)}")
raise HTTPException(status_code=500, detail=str(e))
```
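A hedged sketch of how memory could be wired into this agent (this follows the commonly used `memory_prompts` pattern; whether `initialize_agent` forwards these kwargs exactly like this is an assumption):

```python
from langchain.memory import ConversationBufferMemory
from langchain.prompts import MessagesPlaceholder

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent = initialize_agent(
    tools + tools_list,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
    agent_kwargs={
        "memory_prompts": [MessagesPlaceholder(variable_name="chat_history")],
        "input_variables": ["input", "agent_scratchpad", "chat_history"],
    },
)
```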
### Expected behavior
I want add history to my agent | Add memeory to langchain AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent | https://api.github.com/repos/langchain-ai/langchain/issues/13845/comments | 2 | 2023-11-25T12:05:33Z | 2024-03-13T19:55:41Z | https://github.com/langchain-ai/langchain/issues/13845 | 2,010,590,203 | 13,845 |
[
"hwchase17",
"langchain"
] | ### System Info
None
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My chat model is ERNIE-Bot; running the following code reported an error:

But after I removed SystemMessage it works fine. So I want to know, do all models support SystemMessage?

### Expected behavior
None | Does only openai support SystemMessage? | https://api.github.com/repos/langchain-ai/langchain/issues/13842/comments | 1 | 2023-11-25T08:57:09Z | 2024-03-13T20:00:32Z | https://github.com/langchain-ai/langchain/issues/13842 | 2,010,536,775 | 13,842 |
[
"hwchase17",
"langchain"
] | ### System Info
I'm running it on Google Colab.
### Who can help?
I'm trying the multi-modal RAG example. No matter what I try, I still get this error.
Please tell me if there is an alternative way, or how we can install it (a hedged guess is sketched below). @bas
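A hedged guess for a Debian-based Colab runtime: install the system packages that `unstructured`/`pdf2image` rely on in a notebook cell (package names are my assumption):

```
!apt-get install -y poppler-utils tesseract-ocr
```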
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just run it on Colab and you will not be able to get output from the partitions:
https://github.com/langchain-ai/langchain/blob/master/cookbook/Multi_modal_RAG.ipynb

### Expected behavior
it should work normal without error on collab | Unable to get page count. Is poppler installed and in PATH | https://api.github.com/repos/langchain-ai/langchain/issues/13838/comments | 3 | 2023-11-25T06:31:04Z | 2024-07-03T17:15:16Z | https://github.com/langchain-ai/langchain/issues/13838 | 2,010,498,080 | 13,838 |
[
"hwchase17",
"langchain"
] | Please help, I have been blocked on this for many days.
**I am trying to filter question answering over PDF documents based on an uploaded email filter. Only if the email matches should the question be answered from that document; otherwise the answer should just be empty.
I tried the following code and it is not working: even for a DIFFERENT email ID it still answers from the document uploaded under my email, which is wrong, so the filtering does not seem to be applied.**
First I tried putting the email in the metadata and it did not work; then I added it as an independent email field and it is not working there either.
I am instantiating the vector store as shown below, and it has an email field. I am able to create the index and successfully upload the PDF document with all the fields, including email, populated. I checked the created index and the email field is filterable and searchable.
- Creating azure instance as below :
```python
self.vector_store = AzureSearch(azure_search_endpoint=endpoint, azure_search_key=admin_key, index_name=index_name, embedding_function=embedding_function, fields=fields)
```
- After doc uploaded as shown below it has email in it .
```
"@odata.context": "https://ops-documents.search.windows.net/indexes('index-new-5')/$metadata#docs(*)",
"value": [
{
"@search.score": 1,
"id": "NjQ1YWViNWQtNDJkNy00NTcxLTlkMTktMDIzZTc0NTZlNDhm",
"content": "ICAO TRIP 2023 – .",
"metadata": "{\"source\": \"7-pl\", \"page\": 7, \"file_id\": \"65612fd773a9aa51a0939c96\", \"upload_document_name\": \"SITA Lab Furhat Backoffice IMM officer Dialog - v1.pdf\", \"email\": \"[email protected]\", \"company\": \"sita\"}",
"email": "[email protected]"
```
- Using the following way to search:
```
my_retriever = self.vector_store.as_retriever(search_kwargs={'filter': { 'email': email }})
qa = RetrievalQA.from_chain_type(llm=self.llm,chain_type="stuff",retriever=my_retriever,chain_type_kwargs={"prompt": self.PROMPT},
return_source_documents=True,
)
results = qa({"query": question})
```
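For completeness, a hedged sketch of the filter syntax I suspect is expected (assumption: langchain's AzureSearch forwards an OData `filters` string to Azure Cognitive Search and silently ignores the dict form used above):

```python
my_retriever = self.vector_store.as_retriever(
    search_kwargs={"filters": f"email eq '{email}'"}  # OData filter on the top-level email field
)
```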
```
Installations using as :
**I am using OPENAI_API_VERSION="2023-10-01-preview" with gpt4 model**
azure-common==1.1.28
azure-core==1.29.5
azure-identity==1.15.0
azure-search==1.0.0b2
azure-search-documents==11.4.0b8
langchain==0.0.326
```
Kindly let me know if anything needed from me . Thanks so much for your help . | lang chain Azure vector search not working neither on its direct fields nor on its metadata fields | https://api.github.com/repos/langchain-ai/langchain/issues/13833/comments | 4 | 2023-11-25T00:45:36Z | 2024-05-08T22:59:31Z | https://github.com/langchain-ai/langchain/issues/13833 | 2,010,391,830 | 13,833 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
Hi, I am trying to implement memory in a RAG agent and am following the documentation, but I get the following error:
ValueError: variable chat_history should be a list of base messages, got
It seems that I should be passing in a chat_history, but all the notebook examples I have seen only pass the question. I have seen some implementations use initialize_agent(), while others use AgentExecutor(). Any help on how to implement memory with the agent would be greatly appreciated.
this is my implementation:
```python
from langchain.vectorstores import Pinecone
from langchain.llms import Cohere
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.prompts import PromptTemplate
import logging
from langchain.chains import LLMChain, ConversationChain
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.chains import RetrievalQA
from langchain.agents import Tool
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
index = pinecone.Index(index_name)
vectorstore = Pinecone(index, embeddings, "text")
llm = Cohere(cohere_api_key=cohere_api_key)
retriever_from_llm = MultiQueryRetriever.from_llm(
retriever=vectorstore.as_retriever(search_kwargs={"k": 10,
'filter': {'user_id_str': '42',
'internal_mydocai_id_str': {"$in":["4", "5"]}}}), llm=llm
)
from langchain.prompts import PromptTemplate
QUERY_PROMPT = PromptTemplate(
input_variables=["question"],
template="""You are an AI language model assistant. Your task is to output the original query and four
different versions of the given user query to retrieve relevant documents from a vector
database. By generating multiple perspectives on the user question, your goal is to help
the user overcome some of the limitations of the distance-based similarity search, while staying in the scope of the original question.
Provide the original query and the alternative questions separated by newlines. Do not output anything else.
Original question: {question}""",
)
logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
compressor = CohereRerank(model= "rerank-multilingual-v2.0", cohere_api_key=cohere_api_key, client=co, user_agent="mydocument", top_n=5
)
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor, base_retriever=retriever_from_llm
)
compressed_docs = compression_retriever.get_relevant_documents(
    question
)
memory = ConversationBufferWindowMemory(k=2, memory_key="chat_history", input_key='input', output_key="output")
# retrieval qa chain
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=compression_retriever,
)
tools = [
Tool(
name='Knowledge Base',
func=qa.run,
description=(
'use this tool when answering general knowledge queries to get '
'more information about the topic'
)
)
]
agent = initialize_agent(
agent='chat-conversational-react-description',
tools=tools,
llm=llm,
verbose=True,
max_iterations=3,
early_stopping_method='generate',
memory=memory
)
agent(question)
```
Error:
```
ValueError: variable chat_history should be a list of base messages, got
```
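A hedged guess at the fix: the `chat-conversational-react-description` agent expects `chat_history` as a list of message objects, so the memory should be created with `return_messages=True` instead of returning a single string:

```python
memory = ConversationBufferWindowMemory(
    k=2,
    memory_key="chat_history",
    return_messages=True,
)
```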
### Suggestion:
_No response_ | Unable to implement memory in RAG agent, asking for chat_history | https://api.github.com/repos/langchain-ai/langchain/issues/13830/comments | 1 | 2023-11-25T00:31:32Z | 2024-03-13T20:02:47Z | https://github.com/langchain-ai/langchain/issues/13830 | 2,010,379,993 | 13,830 |
[
"hwchase17",
"langchain"
] | I am using a local LLM with LangChain: openhermes-2.5-mistral-7b.Q8_0.gguf.

When using the database agent, this is how I am initializing things:
```python
import datetime

from langchain.agents import AgentType
from langchain.agents.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.llms import LlamaCpp
from langchain.utilities import SQLDatabase

# sql_uri (the MSSQL connection string) is defined elsewhere in the script.
db = SQLDatabase.from_uri(sql_uri)

model_path = "./openhermes-2.5-mistral-7b.Q8_0.gguf"

n_gpu_layers = 1  # Change this value based on your model and your GPU VRAM pool.
n_batch = 512  # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
n_ctx = 50000
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(
    model_path=model_path,
    # temperature=0,
    n_gpu_layers=n_gpu_layers,
    n_batch=n_batch,
    n_ctx=n_ctx,
    callback_manager=callback_manager,
    verbose=True,
)

# toolkit = CustomSQLDatabaseToolkit(db=db, llm=llm)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
toolkit.get_tools()

PREFIX = '''You are a SQL expert. You have access to a Microsoft SQL Server database.
Identify which tables can be used to answer the user's question and write and execute a SQL query accordingly.
'''

FORMAT_INSTRUCTIONS = """RESPONSE FORMAT INSTRUCTIONS
----------------------------

When responding, please output a response in this format:

thought: Reason about what action to take next, and whether to use a tool.
action: The tool to use. Must be one of: {tool_names}
action_input: The input to the tool

For example:

thought: I need to get all tables from the database
action: sql_db_list_tables
action_input: Empty string
"""

agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
    agent_kwargs={
        'prefix': PREFIX,
        'format_instructions': FORMAT_INSTRUCTIONS,
    }
)

now = datetime.datetime.now()
print("Starting executor: ")
print(now.strftime("%Y-%m-%d %H:%M:%S"))

agent_executor.run("Who is the oldest user?")
```
When entering the chain, I usually get the error "Could not parse LLM output", because despite the instructions the Action Input part is not produced by the LLM.
```
> Entering new AgentExecutor chain...
Action: sql_db_list_tables
Traceback (most recent call last):
  File "/Users/dino/Codings/python/LLM_test1/.venv/lib/python3.9/site-packages/langchain/agents/agent.py", line 1032, in _take_next_step
    output = self.agent.plan(
  File "/Users/dino/Codings/python/LLM_test1/.venv/lib/python3.9/site-packages/langchain/agents/agent.py", line 636, in plan
    return self.output_parser.parse(full_output)
  File "/Users/dino/Codings/python/LLM_test1/.venv/lib/python3.9/site-packages/langchain/agents/mrkl/output_parser.py", line 70, in parse
    raise OutputParserException(
langchain.schema.output_parser.OutputParserException: Could not parse LLM output: `Action: sql_db_list_tables`
```
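
One possible direction is a more lenient output parser that tolerates a missing `action_input` line instead of raising. A rough sketch (the regexes and the empty-string fallback are assumptions, not LangChain defaults):

```python
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish, OutputParserException


class LenientOutputParser(AgentOutputParser):
    """Parse ReAct-style output, defaulting to an empty tool input when it is missing."""

    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        if "Final Answer:" in text:
            return AgentFinish({"output": text.split("Final Answer:")[-1].strip()}, text)
        action = re.search(r"[Aa]ction:[ \t]*(.*)", text)
        if not action:
            raise OutputParserException(f"Could not parse LLM output: `{text}`")
        action_input = re.search(r"[Aa]ction[_ ][Ii]nput:[ \t]*(.*)", text)
        tool_input = action_input.group(1).strip() if action_input else ""
        return AgentAction(action.group(1).strip(), tool_input, text)
```

How this gets wired into `create_sql_agent` (for example as an `output_parser` argument to the underlying agent) varies between LangChain versions, so treat that part as an assumption as well.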
Any idea how to fix this? | Langchain Database Agent with local LLM | https://api.github.com/repos/langchain-ai/langchain/issues/13826/comments | 5 | 2023-11-24T21:49:25Z | 2024-03-04T11:57:39Z | https://github.com/langchain-ai/langchain/issues/13826 | 2,010,243,906 | 13,826
[
"hwchase17",
"langchain"
] | ### System Info
Hello! I got this error while trying to run the code from the [docs](https://python.langchain.com/docs/integrations/tools/dalle_image_generator).
I have Python 3.11.3, openai 1.3.5, and langchain 0.0.340.
```
You tried to access openai.Image, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Copy the code from the [docs](https://python.langchain.com/docs/integrations/tools/dalle_image_generator) and run it with python3.
```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.utilities.dalle_image_generator import DallEAPIWrapper
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
input_variables=["image_desc"],
template="Generate a detailed prompt to generate an image based on the following description: {image_desc}",
)
chain = LLMChain(llm=llm, prompt=prompt)
image_url = DallEAPIWrapper().run(chain.run("halloween night at a haunted museum"))
print(image_url)
```
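
For reference, the equivalent image request against the >=1.0 client interface looks roughly like this (a sketch that bypasses the LangChain wrapper entirely; the size argument and default model choice are illustrative):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.images.generate(
    prompt="halloween night at a haunted museum",
    n=1,
    size="1024x1024",
)
print(response.data[0].url)
```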
### Expected behavior
get an image url | getting an error with DallEAPIWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/13825/comments | 3 | 2023-11-24T20:46:33Z | 2024-03-13T20:03:04Z | https://github.com/langchain-ai/langchain/issues/13825 | 2,010,209,501 | 13,825 |
[
"hwchase17",
"langchain"
] | ### Issue you'd like to raise.
My ChatOpenAI calls usually take around 1000 ms. I want to switch to GooglePalm or some other language model when ChatOpenAI's response time is too high. Is that possible?
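
One pattern that comes close is combining a short request timeout with LangChain's runnable fallbacks, so the chat model switches on timeout errors rather than on measured latency. A rough sketch (the 2-second threshold and the PaLM fallback are illustrative assumptions):

```python
from langchain.chat_models import ChatGooglePalm, ChatOpenAI

# Fail fast on OpenAI so the fallback can take over.
primary = ChatOpenAI(request_timeout=2, max_retries=0)
backup = ChatGooglePalm()

llm = primary.with_fallbacks([backup])
print(llm.invoke("Hello!"))
```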
### Suggestion:
_No response_ | Is it possible to switch language models if the ms in the first model is large? | https://api.github.com/repos/langchain-ai/langchain/issues/13821/comments | 2 | 2023-11-24T17:25:34Z | 2024-03-13T19:56:31Z | https://github.com/langchain-ai/langchain/issues/13821 | 2,010,029,887 | 13,821 |