issue_owner_repo (sequence, length 2) | issue_body (string, 0–261k chars, nullable ⌀) | issue_title (string, 1–925 chars) | issue_comments_url (string, 56–81 chars) | issue_comments_count (int64, 0–2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37–62 chars) | issue_github_id (int64, 387k–2.46B) | issue_number (int64, 1–127k)
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_google_vertexai import ChatVertexAI
from langchain_core.prompts import ChatPromptTemplate
llm = ChatVertexAI(
model_name="gemini-pro",
project="my_project",
convert_system_message_to_human=True,
)
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
(
"You are a system that translates each input from english to german."
),
),
("user", "{question}"),
]
)
chain = prompt | llm
answer = chain.invoke({"question": "Hello, how are you?"})
print(answer)
```
This code raises the following exception:
```
...
env/lib/python3.11/site-packages/langchain_google_vertexai/chat_models.py", line 167, in _parse_chat_history_gemini
raise ValueError(
ValueError: SystemMessages are not yet supported!
To automatically convert the leading SystemMessage to a HumanMessage,
set `convert_system_message_to_human` to True. Example:
llm = ChatVertexAI(model_name="gemini-pro", convert_system_message_to_human=True)
```
### Description
The problem is that the check in langchain_google_vertexai/chat_models.py (line 165) always evaluates to True, regardless of whether convert_system_message_to_human is True or False:
```python
[...]
for i, message in enumerate(history):
if (
i == 0
and isinstance(message, SystemMessage)
and not convert_system_message_to_human,
):
```
As can be seen, the ',' after `and not convert_system_message_to_human` must be removed; otherwise the whole condition is interpreted as a one-element tuple, which always evaluates to `True` regardless of whether `convert_system_message_to_human` is True or False.
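With the trailing comma removed, the check works as intended (any non-empty tuple is truthy; `bool((False,))` is `True`):
```python
for i, message in enumerate(history):
    if (
        i == 0
        and isinstance(message, SystemMessage)
        and not convert_system_message_to_human  # no trailing comma
    ):
        raise ValueError(...)  # as in the original
```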
### System Info
langchain==0.1.3
langchain-community==0.0.15
langchain-core==0.1.15
langchain-google-vertexai==0.0.2
langchain-openai==0.0.3
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ChatVertexAI model - convert_system_message_to_human argument = True is ignored | https://api.github.com/repos/langchain-ai/langchain/issues/16503/comments | 4 | 2024-01-24T11:13:09Z | 2024-02-19T08:35:34Z | https://github.com/langchain-ai/langchain/issues/16503 | 2,098,035,111 | 16,503 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
In the Metal section of the https://python.langchain.com/docs/integrations/llms/llamacpp document, the description of `n_gpu_layers` says `Metal set to 1 is enough.`
I haven't found the exact reason for this, and when I tested it locally I found that a larger value of `n_gpu_layers` significantly improves execution speed. I have a complete ipynb file here: https://github.com/169/ai-snippets/blob/main/llama-cpp.ipynb
Here I explain why I came to this conclusion.
First, run inference with the `main` binary compiled from `llama.cpp`. You can see that, by default, all 33 layers are offloaded to the GPU:
```
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: Metal buffer size = 4095.07 MiB
llm_load_tensors: CPU buffer size = 70.31 MiB
```
It's also fast: about 33 tokens/s (from `total time = 3760.69 ms / 124 tokens`).
But if I use `n_gpu_layers=1`, only one layer is offloaded to the GPU, and the rest is given to the CPU:
```
llm_load_tensors: offloading 1 repeating layers to GPU
llm_load_tensors: offloaded 1/33 layers to GPU
llm_load_tensors: CPU buffer size = 4165.37 MiB
llm_load_tensors: Metal buffer size = 132.51 MiB
```
Much slower: about 18 tokens/s (from `2435.17 ms / 43 tokens`).
Under the same conditions, changing to `n_gpu_layers=33` has the same effect as using `./main`:
```
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors: CPU buffer size = 70.31 MiB
llm_load_tensors: Metal buffer size = 4095.07 MiB
```
The speed also increases to about 31 tokens/s.
So I think that, within the allowed range, the larger the value of `n_gpu_layers`, the faster the inference. There are also posts with similar doubts: https://www.reddit.com/r/LangChain/comments/18lb4n4/llamacpp_on_mac_n_gpu_layers_n_batch/
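For reference, the LangChain-side configuration I benchmarked looks roughly like this (a sketch: the model path is a placeholder, and 33 happens to be the full layer count of this particular model):
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="/path/to/model.gguf",  # placeholder
    n_gpu_layers=33,  # offload all 33 layers to Metal instead of just 1
    n_batch=512,
    f16_kv=True,
)
```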
I'm a bit confused, so I added a [PR](https://github.com/langchain-ai/langchain/pull/16501) to remove this part of the description.
@genewoo I see you added this part, do you have any other context proving that using `n_gpu_layers=1` is a best practice?
### Idea or request for content:
_No response_ | DOC: The description of ·n_gpu_layers· in https://python.langchain.com/docs/integrations/llms/llamacpp#metal is incorrect | https://api.github.com/repos/langchain-ai/langchain/issues/16502/comments | 3 | 2024-01-24T11:03:16Z | 2024-06-08T16:09:20Z | https://github.com/langchain-ai/langchain/issues/16502 | 2,098,016,823 | 16,502 |
[
"hwchase17",
"langchain"
] | ### Feature request
https://www.assemblyai.com/products/lemur
Add support for LeMUR endpoints, since we already have an AssemblyAI (AAI) integration.
### Motivation
Helpful for folks who are on a paid AAI plan and use LeMUR, so they can migrate their codebase to LangChain.
### Your contribution
Perhaps. | Add support for Assembly AI Lemur | https://api.github.com/repos/langchain-ai/langchain/issues/16496/comments | 1 | 2024-01-24T06:44:02Z | 2024-05-01T16:07:09Z | https://github.com/langchain-ai/langchain/issues/16496 | 2,097,539,943 | 16,496 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain.agents.agent_toolkits import create_conversational_retrieval_agent
from langchain_openai import ChatOpenAI
from langchain_community.vectorstores import Chroma
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain.schema.messages import SystemMessage
embedding = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding)  # persist_directory: path to an existing Chroma DB, defined elsewhere
retriever = vectordb.as_retriever()
tool = create_retriever_tool(
retriever,
"search_docs",
"Searches and returns documents."
)
tools = [tool]
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-1106")
system_msg = SystemMessage(content=system_msg_txt)  # system_msg_txt defined elsewhere
agent_executor = create_conversational_retrieval_agent(
llm, tools,
system_message=system_msg,
max_token_limit=6000,
remember_intermediate_steps=True,
verbose=True)
response = agent_executor.invoke({"input": "What did the president say about Ketanji Brown Jackson?"})
print(response)
```
### Description
There seems to be a change of output between v0.1.0 and v>=0.1.1.
Question: is the v0.1.1 behavior the correct one moving forward?
The FunctionMessage['content'] is:
- a stringified list of `Document(...)` objects in v==0.1.0
- just the raw page text, like `'these\tcategories\tent...'`, in v>=0.1.1
See below; the differing `FunctionMessage` content is the key part.
ver 0.1.0
=========
```
{'input': 'What did the president say about Ketanji Brown Jackson?',
'chat_history': [
HumanMessage(content='What did the president say about Ketanji Brown Jackson?'),
AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{"query":"president say about Ketanji Brown Jackson"}', 'name': 'search_docs'}}),
FunctionMessage(content="[
Document(page_content='these\\tcategories\\tentailed\\ta\\tduty,....\\tpoint', metadata={'page': 2, 'source': 'media/statofunion.txt'}),
Document(page_content='fourth:\\tings\\tof\\tpleasure\\tand\\trapture,...\\tthe', metadata={'page': 5, 'source': 'media/statofunion.txt'})]", name='search_docs'),
AIMessage(content='...')],
'output': '...',
'intermediate_steps':
...
}
```
ver >= 0.1.1
============
```
{'input': 'What did the president say about Ketanji Brown Jackson?',
'chat_history': [
HumanMessage(content='What did the president say about Ketanji Brown Jackson?'),
AIMessage(content='', additional_kwargs={'function_call': {'arguments': '{"query":"president say about Ketanji Brown Jackson"}', 'name': 'search_docs'}}),
FunctionMessage(content='these\tcategories\tentailed\ta\tduty,...', name='search_docs'),
AIMessage(content='The president...')],
'output': '...',
'intermediate_steps':
...
}
```
### System Info
V0.1.0:
====
langchain==0.1.0
langchain-community==0.0.14
langchain-core==0.1.15
langchain-openai==0.0.3
v0.1.1
====
langchain==0.1.1
langchain-community==0.0.14
langchain-core==0.1.14
langchain-openai==0.0.3
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | FunctionMessage['content'] is different between v==0.1.0 and v>=0.1.1; which is the correct one? | https://api.github.com/repos/langchain-ai/langchain/issues/16493/comments | 1 | 2024-01-24T05:25:36Z | 2024-05-05T16:06:32Z | https://github.com/langchain-ai/langchain/issues/16493 | 2,097,453,960 | 16,493 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
import sqlalchemy as sal
import os, sys, openai
import pandas as pd
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
PlanAndExecute,
load_agent_executor,
load_chat_planner,
)
from langchain.sql_database import SQLDatabase
from langchain.text_splitter import TokenTextSplitter
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool
from secret_key import openapi_key
from langchain.prompts import PromptTemplate
os.environ['OPENAI_API_KEY'] = openapi_key
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
engine = create_engine(connection_uri)
def chat(question, sql_format):
# greetings = ["hi", "hello", "hey"]
# if question.lower() in greetings:
# return "Hello! How can I assist you today?"
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
PROMPT = """
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the answer.
Return the answer in a sentence form.
The question: {question}
"""
prompt_template = """
Use the following pieces of context to answer the question at the end. If you don't know the answer,
just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Helpful Answer:"""
answer = None
if sql_format==False:
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
answer = db_chain.run(PROMPT.format(question=question))
else:
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True , return_sql =True)
sql_query = db_chain.run(question)
print("SQLQuery: "+str(sql_query))
# result = engine.execute(sql_query)
result_df = pd.read_sql(sql_query, engine)
if result_df.empty:
return "No results found"
answer = result_df.to_dict()
def handle_greetings(question):
greetings = ["hi", "hello", "hey"]
if any(greeting in question.lower() for greeting in greetings):
return "Hello! How can I assist you today?"
else:
return None
PROMPT = PromptTemplate(
template=prompt_template,
input_variables=[question],
preprocessor=handle_greetings(question)
)
def split_text(text, chunk_size, chunk_overlap=0):
text_splitter = TokenTextSplitter(
chunk_size=chunk_size, chunk_overlap=chunk_overlap
)
yield from text_splitter.split_text(text)
class QuerySQLDatabaseTool2(QuerySQLDataBaseTool):
def _run(
self,
query: str,
run_manager: Optional[CallbackManagerForToolRun] = None,
) -> str:
result = self.db.run_no_throw(query)
return next(split_text(result, chunk_size=14_000))
class SQLDatabaseToolkit2(SQLDatabaseToolkit):
def get_tools(self) -> List[BaseTool]:
tools = super().get_tools()
original_query_tool_description = tools[0].description
new_query_tool = QuerySQLDatabaseTool2(
db=self.db, description=original_query_tool_description
)
tools[0] = new_query_tool
return tools
# return db_chain.run(question)
return answer
def chain1(question):
text = chat(question,False)
return text
def chain2(question):
query = chat(question,True)
return query
answer=chain1("what is the uan number for AD#######")
print(answer)
```
### Description
In the chatbot, which is connected to the DB: when I ask a question like "give the UAN number for AD#######", and UAN is not present in the DB, it fetches the EUID number instead. If some particular data is not there, the answer should be "invalid question"; it should not execute a wrong query. For example, here is the chain output:
```
Answer: I'm sorry, but I cannot answer the question "hi" as it is not a valid question. Please provide a specific question related to the data in the table.

> Finished chain.
I'm sorry, but I cannot answer the question "hi" as it is not a valid question. Please provide a specific question related to the data in the table.

> Entering new SQLDatabaseChain chain...
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the answer.
Return the answer in a sentence form.
The question: what is the uan number for AD23010923
SQLQuery: SELECT [EmployeeID], [EmployeeName], [EmployeeNameAsPerBank], [EmployeeEuid]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeID] = 'AD23010923'
SQLResult: [('AD########', '######', 'S######## P', Decimal('######'))]
Answer: The UAN number for AD####### is #####.

> Finished chain.
The UAN number for AD###### is ####.
```
How can I validate each answer before we return the output?
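Something along these lines is what I have in mind (a rough sketch; the keyword-to-column mapping is purely illustrative):
```python
# refuse to run SQL for fields the table does not actually have
KNOWN_COLUMNS = {"EmployeeID", "EmployeeName", "EmployeeNameAsPerBank", "EmployeeEuid"}
REQUESTED_FIELDS = {"uan": None, "euid": "EmployeeEuid"}  # "uan" has no column

def validate_question(question: str) -> bool:
    for keyword, column in REQUESTED_FIELDS.items():
        if keyword in question.lower() and column not in KNOWN_COLUMNS:
            return False  # unknown field -> answer "Invalid Question"
    return True
```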
To handle this, can we modify base.py in langchain_experimental.sql (langchain_experimental/sql/base.py), since that seems to be where results are fetched from the DB? For reference:
"""Chain for interacting with SQL Database."""
from __future__ import annotations
import warnings
from typing import Any, Dict, List, Optional
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain
from langchain.chains.llm import LLMChain
from langchain.chains.sql_database.prompt import DECIDER_PROMPT, PROMPT, SQL_PROMPTS
from langchain.prompts.prompt import PromptTemplate
from langchain.schema import BasePromptTemplate
from langchain.schema.language_model import BaseLanguageModel
from langchain.tools.sql_database.prompt import QUERY_CHECKER
from langchain.utilities.sql_database import SQLDatabase
from langchain_experimental.pydantic_v1 import Extra, Field, root_validator
INTERMEDIATE_STEPS_KEY = "intermediate_steps"
class SQLDatabaseChain(Chain):
"""Chain for interacting with SQL Database.
Example:
.. code-block:: python
from langchain_experimental.sql import SQLDatabaseChain
from langchain.llms import OpenAI, SQLDatabase
db = SQLDatabase(...)
db_chain = SQLDatabaseChain.from_llm(OpenAI(), db)
*Security note*: Make sure that the database connection uses credentials
that are narrowly-scoped to only include the permissions this chain needs.
Failure to do so may result in data corruption or loss, since this chain may
attempt commands like `DROP TABLE` or `INSERT` if appropriately prompted.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this chain.
This issue shows an example negative outcome if these steps are not taken:
https://github.com/langchain-ai/langchain/issues/5923
"""
llm_chain: LLMChain
llm: Optional[BaseLanguageModel] = None
"""[Deprecated] LLM wrapper to use."""
database: SQLDatabase = Field(exclude=True)
"""SQL Database to connect to."""
prompt: Optional[BasePromptTemplate] = None
"""[Deprecated] Prompt to use to translate natural language to SQL."""
top_k: int = float('inf')
"""Number of results to return from the query"""
input_key: str = "query" #: :meta private:
output_key: str = "result" #: :meta private:
return_sql: bool = False
"""Will return sql-command directly without executing it"""
return_intermediate_steps: bool = False
"""Whether or not to return the intermediate steps along with the final answer."""
return_direct: bool = False
"""Whether or not to return the result of querying the SQL table directly."""
use_query_checker: bool = False
"""Whether or not the query checker tool should be used to attempt
to fix the initial SQL from the LLM."""
query_checker_prompt: Optional[BasePromptTemplate] = None
"""The prompt template that should be used by the query checker"""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@root_validator(pre=True)
def raise_deprecation(cls, values: Dict) -> Dict:
if "llm" in values:
warnings.warn(
"Directly instantiating an SQLDatabaseChain with an llm is deprecated. "
"Please instantiate with llm_chain argument or using the from_llm "
"class method."
)
if "llm_chain" not in values and values["llm"] is not None:
database = values["database"]
prompt = values.get("prompt") or SQL_PROMPTS.get(
database.dialect, PROMPT
)
values["llm_chain"] = LLMChain(llm=values["llm"], prompt=prompt)
return values
@property
def input_keys(self) -> List[str]:
"""Return the singular input key.
:meta private:
"""
return [self.input_key]
@property
def output_keys(self) -> List[str]:
"""Return the singular output key.
:meta private:
"""
if not self.return_intermediate_steps:
return [self.output_key]
else:
return [self.output_key, INTERMEDIATE_STEPS_KEY]
def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
input_text = f"{inputs[self.input_key]}\nSQLQuery:"
print("SQLQuery")
_run_manager.on_text(input_text, verbose=self.verbose)
# If not present, then defaults to None which is all tables.
table_names_to_use = inputs.get("table_names_to_use")
table_info = self.database.get_table_info(table_names=table_names_to_use)
llm_inputs = {
"input": input_text,
"top_k": str(self.top_k),
"dialect": self.database.dialect,
"table_info": table_info,
"stop": ["\nSQLResult:"],
}
if self.memory is not None:
for k in self.memory.memory_variables:
llm_inputs[k] = inputs[k]
intermediate_steps: List = []
try:
intermediate_steps.append(llm_inputs.copy()) # input: sql generation
sql_cmd = self.llm_chain.predict(
callbacks=_run_manager.get_child(),
**llm_inputs,
).strip()
if self.return_sql:
return {self.output_key: sql_cmd}
if not self.use_query_checker:
_run_manager.on_text(sql_cmd, color="green", verbose=self.verbose)
intermediate_steps.append(
sql_cmd
) # output: sql generation (no checker)
intermediate_steps.append({"sql_cmd": sql_cmd}) # input: sql exec
print(sql_cmd)
result = self.database.run(sql_cmd)
print(result)
intermediate_steps.append(str(result)) # output: sql exec
else:
query_checker_prompt = self.query_checker_prompt or PromptTemplate(
template=QUERY_CHECKER, input_variables=["query", "dialect"]
)
query_checker_chain = LLMChain(
llm=self.llm_chain.llm, prompt=query_checker_prompt
)
query_checker_inputs = {
"query": sql_cmd,
"dialect": self.database.dialect,
}
checked_sql_command: str = query_checker_chain.predict(
callbacks=_run_manager.get_child(), **query_checker_inputs
).strip()
intermediate_steps.append(
checked_sql_command
) # output: sql generation (checker)
_run_manager.on_text(
checked_sql_command, color="green", verbose=self.verbose
)
intermediate_steps.append(
{"sql_cmd": checked_sql_command}
) # input: sql exec
result = self.database.run(checked_sql_command)
intermediate_steps.append(str(result)) # output: sql exec
sql_cmd = checked_sql_command
_run_manager.on_text("\nSQLResult: ", verbose=self.verbose)
_run_manager.on_text(result, color="yellow", verbose=self.verbose)
# If return direct, we just set the final result equal to
# the result of the sql query result, otherwise try to get a human readable
# final answer
if self.return_direct:
final_result = result
else:
_run_manager.on_text("\nAnswer:", verbose=self.verbose)
# if result:
# input_text += f"{sql_cmd}\nSQLResult: {result}\nAnswer:"
# else:
# input_text += f"{sql_cmd}\nSQLResult: {result}\nAnswer: {'No result found' if not result else ''}"
input_text += f"{sql_cmd}\nSQLResult: {result}\nAnswer:"
llm_inputs["input"] = input_text
intermediate_steps.append(llm_inputs.copy()) # input: final answer
final_result = self.llm_chain.predict(
callbacks=_run_manager.get_child(),
**llm_inputs,
).strip()
# print("------", result)
if not result:
final_result = 'Invalid Question'
# print("....",final_result)
intermediate_steps.append(final_result) # output: final answer
_run_manager.on_text(final_result, color="green", verbose=self.verbose)
chain_result: Dict[str, Any] = {self.output_key: final_result}
if self.return_intermediate_steps:
chain_result[INTERMEDIATE_STEPS_KEY] = intermediate_steps
print("----"+str(chain_result)+"-----")
return chain_result
except Exception as exc:
# Append intermediate steps to exception, to aid in logging and later
# improvement of few shot prompt seeds
exc.intermediate_steps = intermediate_steps # type: ignore
raise exc
@property
def _chain_type(self) -> str:
return "sql_database_chain"
@classmethod
def from_llm(
cls,
llm: BaseLanguageModel,
db: SQLDatabase,
prompt: Optional[BasePromptTemplate] = None,
**kwargs: Any,
) -> SQLDatabaseChain:
"""Create a SQLDatabaseChain from an LLM and a database connection.
*Security note*: Make sure that the database connection uses credentials
that are narrowly-scoped to only include the permissions this chain needs.
Failure to do so may result in data corruption or loss, since this chain may
attempt commands like `DROP TABLE` or `INSERT` if appropriately prompted.
The best way to guard against such negative outcomes is to (as appropriate)
limit the permissions granted to the credentials used with this chain.
This issue shows an example negative outcome if these steps are not taken:
https://github.com/langchain-ai/langchain/issues/5923
"""
prompt = prompt or SQL_PROMPTS.get(db.dialect, PROMPT)
llm_chain = LLMChain(llm=llm, prompt=prompt)
return cls(llm_chain=llm_chain, database=db, **kwargs)
class SQLDatabaseSequentialChain(Chain):
"""Chain for querying SQL database that is a sequential chain.
The chain is as follows:
1. Based on the query, determine which tables to use.
2. Based on those tables, call the normal SQL database chain.
3. Don't consider the table which are not mentoned, if no result is matching with the keyword Please return the answer as invalid question
This is useful in cases where the number of tables in the database is large.
"""
decider_chain: LLMChain
sql_chain: SQLDatabaseChain
input_key: str = "query" #: :meta private:
output_key: str = "result" #: :meta private:
return_intermediate_steps: bool = False
@classmethod
def from_llm(
cls,
llm: BaseLanguageModel,
db: SQLDatabase,
query_prompt: BasePromptTemplate = PROMPT,
decider_prompt: BasePromptTemplate = DECIDER_PROMPT,
**kwargs: Any,
) -> SQLDatabaseSequentialChain:
"""Load the necessary chains."""
sql_chain = SQLDatabaseChain.from_llm(llm, db, prompt=query_prompt, **kwargs)
decider_chain = LLMChain(
llm=llm, prompt=decider_prompt, output_key="table_names"
)
return cls(sql_chain=sql_chain, decider_chain=decider_chain, **kwargs)
@property
def input_keys(self) -> List[str]:
"""Return the singular input key.
:meta private:
"""
return [self.input_key]
@property
def output_keys(self) -> List[str]:
"""Return the singular output key.
:meta private:
"""
if not self.return_intermediate_steps:
return [self.output_key]
else:
return [self.output_key, INTERMEDIATE_STEPS_KEY]
def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
_run_manager = run_manager or CallbackManagerForChainRun.get_noop_manager()
_table_names = self.sql_chain.database.get_usable_table_names()
table_names = ", ".join(_table_names)
llm_inputs = {
"query": inputs[self.input_key],
"table_names": table_names,
}
_lowercased_table_names = [name.lower() for name in _table_names]
table_names_from_chain = self.decider_chain.predict_and_parse(**llm_inputs)
table_names_to_use = [
name
for name in table_names_from_chain
if name.lower() in _lowercased_table_names
]
_run_manager.on_text("Table names to use:", end="\n", verbose=self.verbose)
_run_manager.on_text(
str(table_names_to_use), color="yellow", verbose=self.verbose
)
new_inputs = {
self.sql_chain.input_key: inputs[self.input_key],
"table_names_to_use": table_names_to_use,
}
return self.sql_chain(
new_inputs, callbacks=_run_manager.get_child(), return_only_outputs=True
)
@property
def _chain_type(self) -> str:
return "sql_database_sequential_chain"
### System Info
python: 3.11
langchain: latest
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | fetching inaccurate answers from the database | https://api.github.com/repos/langchain-ai/langchain/issues/16491/comments | 13 | 2024-01-24T05:09:25Z | 2024-05-01T16:06:59Z | https://github.com/langchain-ai/langchain/issues/16491 | 2,097,440,450 | 16,491 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
agent_executor = create_sql_agent(
    prefix=MSSQL_AGENT_PREFIX,
    format_instructions=MSSQL_AGENT_OUTPUT_FORMAT_INSTRUCTIONS,
    suffix=MSSQL_AGENT_SUFFIX,
    llm=llm,
    toolkit=toolkit,
    extra_tools=custom_tool_list,
    agent_type=AgentType.OPENAI_FUNCTIONS,
    callback_manager=self.callbacks,
    top_k=self.k,
    verbose=True,
    handle_parsing_errors=True,
    return_intermediate_steps=True
)
```
### Description
We are using create_sql_agent and want to show the final response as DB records in an HTML table plus a short summary.
When we ask the LLM (GPT-4) to generate the final response in HTML, the completion is around 1200 tokens, resulting in high latency.
To overcome the latency issue, we want to build the HTML table ourselves and have the LLM provide only the short summary with insights.
To achieve this, we just want the SQL query as output.
Is there a way, using callbacks or something else, to achieve this?
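One idea we are considering (a rough sketch; it assumes the SQL execution tool keeps its default name `sql_db_query` and that `return_intermediate_steps=True` exposes the agent's tool calls as `(action, observation)` pairs):
```python
response = agent_executor.invoke({"input": question})  # question: the user input

sql_query = None
for action, observation in response["intermediate_steps"]:
    if action.tool == "sql_db_query":  # assumption: default query tool name
        sql_query = action.tool_input  # raw SQL; we render the HTML table ourselves
```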
Please let us know if you have ideas for the same.
Thanks a lot!
### System Info
langchain==0.0.351
langchain-community==0.0.4
langchain-core==0.1.1
langsmith==0.0.72
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | How to return SQL query only in create_sql_agent to avoid latency because of high completion tokens? | https://api.github.com/repos/langchain-ai/langchain/issues/16489/comments | 9 | 2024-01-24T04:29:10Z | 2024-05-01T16:06:53Z | https://github.com/langchain-ai/langchain/issues/16489 | 2,097,406,827 | 16,489 |
[
"hwchase17",
"langchain"
] | ### Feature request
UnstructuredFileLoader currently only supports local files and the hosted Unstructured API. This request is to expand the loader's ingest capabilities in the Python library by adding options for S3 (streaming) and a bytearray.
### Motivation
Running in environments like Kubernetes, it is inconvenient to have to fetch a document and store it locally in the container in order to process it. This requires planning for storage capacity (either local storage or configuring PVCs).
### Your contribution
Possibly, but not any time soon. | UnstructuredFileLoader support for S3 and bytearray | https://api.github.com/repos/langchain-ai/langchain/issues/16488/comments | 1 | 2024-01-24T02:49:00Z | 2024-05-01T16:06:49Z | https://github.com/langchain-ai/langchain/issues/16488 | 2,097,319,644 | 16,488 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_community.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
from langchain_openai import OpenAI
from langchain.chains import create_sql_query_chain
from langchain_openai import ChatOpenAI
# initial setup follows https://python.langchain.com/docs/use_cases/qa_structured/sql/#case-2-text-to-sql-query-and-execution
db = SQLDatabase.from_uri("sqlite:///./Chinook.db")
chain = create_sql_query_chain(ChatOpenAI(
openai_api_key = "my-api-here",
openai_api_base = "http://gpt-proxy.jd.com/gateway/azure",
temperature = 0,
model = 'gpt-35-turbo-1106'), db)
response = chain.invoke({"question": "How many employees are there"})
print(response)
```
I tested the LLM with the snippet below:
```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

llm = ChatOpenAI(
openai_api_key = 'my-api-here',
openai_api_base = "http://gpt-proxy.jd.com/gateway/azure",
temperature = 0,
model = 'gpt-35-turbo-1106'
)
template = """Question: {question}
Answer: Let's think step by step"""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Beiber was born?"
llm_chain.run(question)
```
This one works well and generates a result like:
```
Justin Bieber was born on March 1, 1994. The Super Bowl for the 1993 NFL season was Super Bowl XXVIII, which was won by the Dallas Cowboys. Therefore, the Dallas Cowboys won the Super Bowl in the year Justin Bieber was born.
```
### Description
I get an error when using the SQL chain; the traceback is below:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In [7], line 7
1 chain = create_sql_query_chain(ChatOpenAI(
2 openai_api_key = 'mask-here,
3 openai_api_base = "http://gpt-proxy.jd.com/gateway/azure",
4 temperature = 0,
5 model = 'gpt-35-turbo-1106'),
6 db)
----> 7 response = chain.invoke({"question": "How many employees are there"})
8 print(response)
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/runnables/base.py:2053, in RunnableSequence.invoke(self, input, config)
2051 try:
2052 for i, step in enumerate(self.steps):
-> 2053 input = step.invoke(
2054 input,
2055 # mark each step as a child run
2056 patch_config(
2057 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
2058 ),
2059 )
2060 # finish the root run
2061 except BaseException as e:
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/runnables/base.py:3887, in RunnableBindingBase.invoke(self, input, config, **kwargs)
3881 def invoke(
3882 self,
3883 input: Input,
3884 config: Optional[RunnableConfig] = None,
3885 **kwargs: Optional[Any],
3886 ) -> Output:
-> 3887 return self.bound.invoke(
3888 input,
3889 self._merge_configs(config),
3890 **{**self.kwargs, **kwargs},
3891 )
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py:165, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
154 def invoke(
155 self,
156 input: LanguageModelInput,
(...)
160 **kwargs: Any,
161 ) -> BaseMessage:
162 config = ensure_config(config)
163 return cast(
164 ChatGeneration,
--> 165 self.generate_prompt(
166 [self._convert_input(input)],
167 stop=stop,
168 callbacks=config.get("callbacks"),
169 tags=config.get("tags"),
170 metadata=config.get("metadata"),
171 run_name=config.get("run_name"),
172 **kwargs,
173 ).generations[0][0],
174 ).message
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py:543, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
535 def generate_prompt(
536 self,
537 prompts: List[PromptValue],
(...)
540 **kwargs: Any,
541 ) -> LLMResult:
542 prompt_messages = [p.to_messages() for p in prompts]
--> 543 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py:407, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
405 if run_managers:
406 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 407 raise e
408 flattened_outputs = [
409 LLMResult(generations=[res.generations], llm_output=res.llm_output)
410 for res in results
411 ]
412 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py:397, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
394 for i, m in enumerate(messages):
395 try:
396 results.append(
--> 397 self._generate_with_cache(
398 m,
399 stop=stop,
400 run_manager=run_managers[i] if run_managers else None,
401 **kwargs,
402 )
403 )
404 except BaseException as e:
405 if run_managers:
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py:576, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
572 raise ValueError(
573 "Asked to cache, but no cache found at `langchain.cache`."
574 )
575 if new_arg_supported:
--> 576 return self._generate(
577 messages, stop=stop, run_manager=run_manager, **kwargs
578 )
579 else:
580 return self._generate(messages, stop=stop, **kwargs)
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_openai/chat_models/base.py:442, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)
436 params = {
437 **params,
438 **({"stream": stream} if stream is not None else {}),
439 **kwargs,
440 }
441 response = self.client.create(messages=message_dicts, **params)
--> 442 return self._create_chat_result(response)
File ~/.pyenv/versions/3.8.10/lib/python3.8/site-packages/langchain_openai/chat_models/base.py:459, in ChatOpenAI._create_chat_result(self, response)
457 if not isinstance(response, dict):
458 response = response.dict()
--> 459 for res in response["choices"]:
460 message = _convert_dict_to_message(res["message"])
461 generation_info = dict(finish_reason=res.get("finish_reason"))
TypeError: 'NoneType' object is not iterable
```
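One way to narrow this down (a sketch; it bypasses LangChain and calls the proxy directly, since the traceback suggests `response["choices"]` came back as `None` rather than a list):
```python
import openai

client = openai.OpenAI(
    api_key="my-api-here",  # same placeholder key as above
    base_url="http://gpt-proxy.jd.com/gateway/azure",
)
raw = client.chat.completions.create(
    model="gpt-35-turbo-1106",
    messages=[{"role": "user", "content": "ping"}],
)
print(raw)  # check whether the proxy response actually contains a `choices` list
```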
### System Info
langchian==0.1.1
python3.8.6
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Using create_sql_query_chain get Nonetype error but LLMChain can generate correctly | https://api.github.com/repos/langchain-ai/langchain/issues/16484/comments | 1 | 2024-01-24T01:54:32Z | 2024-05-01T16:06:43Z | https://github.com/langchain-ai/langchain/issues/16484 | 2,097,277,679 | 16,484 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Calling `similarity_search_with_score` with both `filter` and `distance_threshold` arguments (which internally calls `_prepare_range_query` with both) raises the following exception:
```
Traceback (most recent call last):
File "/Users/me/Codes/myproject/components/storage/redisHandler.py", line 122, in run_similarity_search
return vectorstore.similarity_search_with_score(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/langchain_community/vectorstores/redis/base.py", line 837, in similarity_search_with_score
raise e
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/langchain_community/vectorstores/redis/base.py", line 828, in similarity_search_with_score
results = self.client.ft(self.index_name).search(redis_query, params_dict) # type: ignore # noqa: E501
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/commands/search/commands.py", line 501, in search
res = self.execute_command(SEARCH_CMD, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/client.py", line 536, in execute_command
return conn.retry.call_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/retry.py", line 46, in call_with_retry
return do()
^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/client.py", line 537, in <lambda>
lambda: self._send_command_parse_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/client.py", line 513, in _send_command_parse_response
return self.parse_response(conn, command_name, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/client.py", line 553, in parse_response
response = connection.read_response()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/me/Codes/myproject/venv/lib/python3.11/site-packages/redis/connection.py", line 524, in read_response
raise response
redis.exceptions.ResponseError: Invalid attribute yield_distance_as
```
Is there a way to provide both arguments?
### Description
* I'm trying to use LangChain for vector search on Redis
* using `similarity_search_with_score` with both a `distance_threshold` and a `filter` invokes `_prepare_range_query` with both arguments
* this generates the following Redis query: `(@content_vector:[VECTOR_RANGE $distance_threshold $vector] (@creation_date:[1705359600.0 +inf] @creation_date:[-inf 1705878000.0]))=>{$yield_distance_as: distance}`
* this query produces the stack trace given above
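For reference, a minimal repro sketch (assumptions: an existing index, an `embeddings` object, and the numeric `creation_date` filter from the generated query; exact import paths may differ by version):
```python
from langchain_community.vectorstores.redis import Redis
from langchain_community.vectorstores.redis.filters import RedisNum

vectorstore = Redis.from_existing_index(
    embedding=embeddings,
    index_name="my-index",
    schema="schema.yaml",
    redis_url="redis://localhost:6379",
)
results = vectorstore.similarity_search_with_score(
    "some query",
    filter=(RedisNum("creation_date") >= 1705359600.0)
    & (RedisNum("creation_date") <= 1705878000.0),
    distance_threshold=0.2,  # combining this with `filter` triggers the error
)
```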
### System Info
MacOS Python 3.11
langchain==0.0.349
langchain-community==0.0.1
langchain-core==0.0.13
redis==5.0.1
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Using both filter and distance_threshold generate an `Invalid attribute yield_distance_as` | https://api.github.com/repos/langchain-ai/langchain/issues/16476/comments | 4 | 2024-01-23T19:48:56Z | 2024-06-08T16:09:15Z | https://github.com/langchain-ai/langchain/issues/16476 | 2,096,865,560 | 16,476 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
LangChain follows a monorepo architecture. It's difficult to see from the releases which packages were released, and which PRs went into them. https://github.com/langchain-ai/langchain/releases
We should update CI to draft better release notes with package information, and potentially break down the PRs by package.
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_community.vectorstores import DeepLake
from langchain_openai import ChatOpenAI

# dataset_path and embeddings are defined elsewhere
db = DeepLake(dataset_path=dataset_path, embedding=embeddings)
retriver = db.as_retriever()
QUERY_PROMPT = PromptTemplate(
input_variables=["inputs"],
template=""" Use the input to retrieve the relevant information or data from the retriever & generate results based on the data
inputs = {inputs}
Generate new ideas & lay out all the information like Game Name, Mechanics, Objective, USP, Level fail condition & Rules. Get the idea from the dataset similar to as they have been described where number is equals to the number of ideas you need to generate.
"""
)
llm = ChatOpenAI(temperature=0.4)
query = "Generate 3 new game idea which includes solving puzzels, only get the idea from the retriever not the whole information,\n Learn the underlying semantics about their game design, mechaincs, USP & other details, do not just copy paste the information fromt the dataset, Learn & generate new ideas\n Verify your results that they do not match 100% to the info available at the dataset"
# Chain
llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT)
# Other inputs
# query="Generate 3 new game idea which includes solving puzzels"
inputs = {"inputs" : query}
# Run
retriever_one = MultiQueryRetriever(
retriever=retriver, llm_chain=llm_chain
)
# Results
unique_docs = retriever_one.get_relevant_documents(
query="Generate 3 new game idea which includes solving puzzels", inputs=inputs
)
```
Error thrown:
```
ValueError                                Traceback (most recent call last)
Cell In[28], line 7
2 retriever_one = MultiQueryRetriever(
3 retriever=retriver, llm_chain=llm_chain
4 )
6 # Results
----> 7 unique_docs = retriever_one.get_relevant_documents(
8 query="Generate 3 new game idea which includes solving puzzels", inputs=inputs
9 )
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\retrievers.py:223, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
221 except Exception as e:
222 run_manager.on_retriever_error(e)
--> 223 raise e
224 else:
225 run_manager.on_retriever_end(
226 result,
227 **kwargs,
228 )
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\retrievers.py:216, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
214 _kwargs = kwargs if self._expects_other_args else {}
215 if self._new_arg_supported:
--> 216 result = self._get_relevant_documents(
217 query, run_manager=run_manager, **_kwargs
218 )
219 else:
220 result = self._get_relevant_documents(query, **_kwargs)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\retrievers\multi_query.py:172, in MultiQueryRetriever._get_relevant_documents(self, query, run_manager)
158 def _get_relevant_documents(
159 self,
160 query: str,
161 *,
162 run_manager: CallbackManagerForRetrieverRun,
163 ) -> List[Document]:
164 """Get relevant documents given a user query.
165
166 Args:
(...)
170 Unique union of relevant documents from all generated queries
171 """
--> 172 queries = self.generate_queries(query, run_manager)
173 if self.include_original:
174 queries.append(query)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\retrievers\multi_query.py:189, in MultiQueryRetriever.generate_queries(self, question, run_manager)
178 def generate_queries(
179 self, question: str, run_manager: CallbackManagerForRetrieverRun
180 ) -> List[str]:
181 """Generate queries based upon user input.
182
183 Args:
(...)
187 List of LLM generated queries that are similar to the user input
188 """
--> 189 response = self.llm_chain(
190 {"question": question}, callbacks=run_manager.get_child()
191 )
192 lines = getattr(response["text"], self.parser_key, [])
193 if self.verbose:
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain_core\_api\deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:363, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
331 """Execute the chain.
332
333 Args:
(...)
354 `Chain.output_keys`.
355 """
356 config = {
357 "callbacks": callbacks,
358 "tags": tags,
359 "metadata": metadata,
360 "run_name": run_name,
361 }
--> 363 return self.invoke(
364 inputs,
365 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
366 return_only_outputs=return_only_outputs,
367 include_run_info=include_run_info,
368 )
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:138, in Chain.invoke(self, input, config, **kwargs)
135 include_run_info = kwargs.get("include_run_info", False)
136 return_only_outputs = kwargs.get("return_only_outputs", False)
--> 138 inputs = self.prep_inputs(input)
139 callback_manager = CallbackManager.configure(
140 callbacks,
141 self.callbacks,
(...)
146 self.metadata,
147 )
148 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:475, in Chain.prep_inputs(self, inputs)
473 external_context = self.memory.load_memory_variables(inputs)
474 inputs = dict(inputs, **external_context)
--> 475 self._validate_inputs(inputs)
476 return inputs
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:264, in Chain._validate_inputs(self, inputs)
262 missing_keys = set(self.input_keys).difference(inputs)
263 if missing_keys:
--> 264 raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'inputs'}
```
### Description
* I am trying to retrieve relevant information from the database based on the given input and have the model generate answers from it.
* But it raises an error saying the `inputs` key is missing.
* I am using MultiQueryRetriever.
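From the traceback, `generate_queries` always invokes the chain as `self.llm_chain({"question": question}, ...)`, so the custom prompt presumably has to declare a `question` input variable. A sketch of the change I would expect to work:
```python
# assumption (based on the traceback): MultiQueryRetriever passes a single
# "question" key to its llm_chain, so the prompt must use that variable name
QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""Use the question to retrieve relevant data from the retriever
and generate new game ideas based on that data.

question: {question}
""",
)
```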
### System Info
langchain==0.1.2
langchain-community==0.0.14
langchain-core==0.1.14
langchain-openai==0.0.3
langchainplus-sdk==0.0.20
jupyter-notebook ==7.0.0
Python==3.11.3
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ValueError: Missing some input keys: {'inputs'} | https://api.github.com/repos/langchain-ai/langchain/issues/16465/comments | 2 | 2024-01-23T17:54:05Z | 2024-01-23T18:35:37Z | https://github.com/langchain-ai/langchain/issues/16465 | 2,096,652,486 | 16,465 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
RunnableParallel is poorly documented right now, and it's one of the most important constructs in LCEL. https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/runnables/base.py#L2453-L2453
### Idea or request for content:
Add documentation to RunnableParallel in this style:
https://github.com/langchain-ai/langchain/blob/cfe95ab08521ddc01e9b65596ca50c9dba2d7677/libs/core/langchain_core/runnables/base.py#L102-L102
https://github.com/langchain-ai/langchain/blob/cfe95ab08521ddc01e9b65596ca50c9dba2d7677/libs/core/langchain_core/runnables/base.py#L1754-L1754
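For instance, a short runnable example of this kind (a sketch) would already go a long way:
```python
from langchain_core.runnables import RunnableLambda, RunnableParallel

runnable = RunnableParallel(
    doubled=RunnableLambda(lambda x: x * 2),
    squared=RunnableLambda(lambda x: x * x),
)
runnable.invoke(3)  # -> {'doubled': 6, 'squared': 9}
```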
| DOC: Add in code documentation to RunnableParallel | https://api.github.com/repos/langchain-ai/langchain/issues/16462/comments | 1 | 2024-01-23T17:06:17Z | 2024-01-26T15:03:55Z | https://github.com/langchain-ai/langchain/issues/16462 | 2,096,565,174 | 16,462 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The code below does not print anything when `test_tool` is run. Variations of this work for `on_llm_start` and `on_llm_end`, but not for `on_tool_start` or `on_tool_end`.
```python
from typing import Any, Dict

from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.callbacks.base import BaseCallbackHandler
from langchain.tools import tool
from langchain_openai import ChatOpenAI

class MyCustomHandler(BaseCallbackHandler):
    def on_tool_start(self, serialized: Dict[str, Any], **kwargs: Any) -> Any:
        """Run when tool starts running."""
        print(f"on_tool_start (A tool is starting!!)")

    # note: this second definition overrides the one above; both variants were tried
    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> Any:
        """Run when tool starts running."""
        print(f"on_tool_start (I'm starting!!)")

    def on_tool_end(self, output: str, **kwargs: Any) -> Any:
        """Run when tool ends running."""
        print(f"on_tool_end (I'm ending!!)")

@tool("test-tool")
def test_tool():
    """A tool that should always be run"""
    return "This result should always be returned"
llm = ChatOpenAI(
    callbacks=[MyCustomHandler()],
)
tools = [test_tool]
prompt = ...  # the agent prompt; defined elsewhere (e.g. pulled from the hub)
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    return_intermediate_steps=True,
    verbose=True,
    callbacks=[MyCustomHandler()],
)
response = agent_executor.invoke(
    {
        "input": "please tell me the results of my test tool",
    }
)
```
### Description
I am trying to get a simple custom callback running when an agent invokes a tool.
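One workaround that may apply (a sketch; it relies on the documented distinction that constructor callbacks are scoped to the object they are defined on, while request-time callbacks are inherited by child runs such as tool invocations):
```python
# pass the handler at request time so child runs (tool calls) inherit it
response = agent_executor.invoke(
    {"input": "please tell me the results of my test tool"},
    config={"callbacks": [MyCustomHandler()]},
)
```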
### System Info
Python 3.9.16
langchain==0.1.2
langchain-community==0.0.14
langchain-core==0.1.14
langchain-openai==0.0.3
langchainhub==0.1.14
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async | Custom Callback handler doesn't run for `on_tool_start` or `on_tool_end` | https://api.github.com/repos/langchain-ai/langchain/issues/16461/comments | 2 | 2024-01-23T16:50:04Z | 2024-01-23T17:12:03Z | https://github.com/langchain-ai/langchain/issues/16461 | 2,096,528,091 | 16,461 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
import json

SAMPLE_JSON_OUTPUT_FROM_LLM_1: str = json.dumps({
    "name": "Henry",
    "age": 25
})
INVALID_JSON_STRING = SAMPLE_JSON_OUTPUT_FROM_LLM_1.replace("\"n", "n")
# Value of INVALID_JSON_STRING is '{name": "Henry", "age": 25}'
# Please note that the `name` key is not formatted properly
from langchain.output_parsers.json import SimpleJsonOutputParser
json_op = SimpleJsonOutputParser()
result = json_op.parse(INVALID_JSON_STRING)
# result is {} whereas I was expecting an error/exception
```
### Description
I was trying to test both the positive and negative cases for JsonOutputParser.
In the code snippet above, you can see that I removed the `"` before the first key.
I debugged the LangChain code; there seems to be a lot of effort to ignore invalid/troubling characters, and that ends up turning the original string into {}.
I would have expected the following behavior:
- By default, no attempt to fix the provided string
- Maybe throw an error if the output is {}
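For contrast, the standard library raises on the same input, which is the behavior I expected the parser to mirror:
```python
json.loads(INVALID_JSON_STRING)  # raises json.decoder.JSONDecodeError
```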
### System Info
pip freeze | grep langchain
langchain==0.1.2
langchain-community==0.0.14
langchain-core==0.1.14
langchain-openai==0.0.3
langchainhub==0.1.14
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | JsonOutputParser does not throw exception on invalid json | https://api.github.com/repos/langchain-ai/langchain/issues/16458/comments | 6 | 2024-01-23T16:02:26Z | 2024-04-30T16:28:05Z | https://github.com/langchain-ai/langchain/issues/16458 | 2,096,421,313 | 16,458 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
In my ReAct agent, I'm trying to call a tool that is defined as follows:
```python
from langchain.tools import tool

@tool(args_schema=SliderInput)
def slider(object_name, value) -> str:
    return "Ok"
```
and the corresponding pydantic model:
```python
from langchain_core.pydantic_v1 import BaseModel, Field

class SliderInput(BaseModel):
    object_name: str = Field(..., description="The name of the slider object")
    value: int = Field(..., description="The value of the slider object to set")
```
I get the following error when the tool is being called:
```python
object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for SliderInput
value
field required (type=value_error.missing)
```
I started with this function definition:
```python
@tool
def slider(object_name: str, value: int) -> str:
    return "Ok"
```
and received the same error.
Additionally, I've also tried this one:
```python
@tool(args_schema=SliderInput)
def slider(object_name: str, value: int) -> str:
    return "Ok"
```
without success.
Why is my code failing?
### Description
I'm using the LangChain library, **version 0.1.1**.
### System Info
langchain==0.1.1
langchain-cli==0.0.20
langchain-community==0.0.13
langchain-core==0.1.12
langchain-experimental==0.0.49
langchain-openai==0.0.2.post1
langchainhub==0.1.14
PRETTY_NAME="Ubuntu 22.04.3 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ValidationError field required (type=value_error.missing) while using react agent | https://api.github.com/repos/langchain-ai/langchain/issues/16456/comments | 3 | 2024-01-23T15:51:48Z | 2024-03-03T11:28:30Z | https://github.com/langchain-ai/langchain/issues/16456 | 2,096,399,198 | 16,456 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The correct type for group_id is Integer.
### Description
The current implementation of SQLRecordManager crashes with databases other than SQLite because of a schema bug; SQLite is tolerant of it.
The correct type for group_id is Integer.
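A minimal repro sketch (assumptions: a reachable Postgres instance plus any documents/vector store; the failure comes from the record manager schema, not the store):
```python
from langchain.indexes import SQLRecordManager, index

record_manager = SQLRecordManager(
    "test_namespace",
    db_url="postgresql+psycopg2://user:pass@localhost:5432/db",  # placeholder URL
)
record_manager.create_schema()

# docs and vectorstore defined elsewhere; any store works for the repro
index(docs, record_manager, vectorstore, cleanup="incremental", source_id_key="source")
```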
### System Info
langchain-community == 0.0.14
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | A bad schema in SQLRecordManager generates an exception when def index() with cleanup="incremental". | https://api.github.com/repos/langchain-ai/langchain/issues/16451/comments | 2 | 2024-01-23T13:43:07Z | 2024-04-30T16:22:03Z | https://github.com/langchain-ai/langchain/issues/16451 | 2,096,124,684 | 16,451 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
)
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"You are a nice chatbot having a conversation with a human."
),
# The `variable_name` here is what must align with memory
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}"),
]
)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
memory.chat_memory.add_user_message("hi, i am Kevin")
memory.chat_memory.add_ai_message("hi, i am nan")
memory.chat_memory.add_user_message("i am late for school")
memory.chat_memory.add_ai_message("oh, sound bad, i hope you can be happier")
memory.chat_memory.add_user_message("in fact, because of mom's mistake, she forgot something always")
memory.chat_memory.add_ai_message("i see, it does not matter, little thing")
memory.chat_memory.add_user_message("ok, let's chat something ")
memory.chat_memory.add_ai_message("sure, i like chat too")
conversation = LLMChain(llm=llm, prompt=prompt, verbose=True, memory=memory)
conversation({"question": "can you tell me why i was late for school"})
```
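As a quick sanity check (a sketch using the snippet's own `memory` object), one can print exactly what gets injected into the `chat_history` placeholder; if the facts are there, the history is reaching the model and the wording difference is just the LLM paraphrasing:
```python
print(memory.load_memory_variables({})["chat_history"])
```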
### Description
The response is: "Sure, I believe that you might have missed the time to leave or forgotten something important during the preparation process."
I expected the response to be: "because of mom's mistake, she always forgets things".
--------------------------------------------------------------
I wonder where this goes wrong?
thanks
### System Info
langchain 0.0.354
langchain-community 0.0.10
langchain-core 0.1.8
langdetect 1.0.9
langsmith 0.0.78
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ConversationBufferMemory does not work | https://api.github.com/repos/langchain-ai/langchain/issues/16448/comments | 9 | 2024-01-23T12:14:31Z | 2024-01-24T12:49:58Z | https://github.com/langchain-ai/langchain/issues/16448 | 2,095,956,356 | 16,448 |
[
"hwchase17",
"langchain"
] | ### Feature request
I am using BM25 retriever from langchain.
After building the retriever from documents, how do I get the relevance score of each retrieved document for a query?
retriever = BM25Retriever.from_documents(...)
result = retriever.get_relevant_documents("foo")
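In the meantime, a hedged sketch of computing scores by dropping down to the underlying rank_bm25 package directly (assuming simple whitespace tokenization):
```python
from rank_bm25 import BM25Okapi

corpus = ["the cat sat on the mat", "dogs chase cats", "foo bar baz"]
bm25 = BM25Okapi([doc.split() for doc in corpus])

query = "foo".split()
print(bm25.get_scores(query))              # one BM25 score per document
print(bm25.get_top_n(query, corpus, n=2))  # the top-2 documents themselves
```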
### Motivation
The documentation is lacking; details of the parameters are missing.
The underlying rank_bm25 Python package has all of this functionality, including tokenizer options.
### Your contribution
Could try | BM25 langchain retriever should have similarity score | https://api.github.com/repos/langchain-ai/langchain/issues/16445/comments | 6 | 2024-01-23T11:47:53Z | 2024-05-13T16:09:56Z | https://github.com/langchain-ai/langchain/issues/16445 | 2,095,906,630 | 16,445 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
embeddings = HuggingFaceInferenceAPIEmbeddings(
api_key=inference_api_key,
api_url=api_url,
model_name="bge-large-en-v1.5"
)
pinecone.init(api_key=os.getenv("PINECONE_API_KEY"), environment=environment)
loader = PyPDFDirectoryLoader("data")
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
chunks = text_splitter.split_documents(docs)
vectordb = Pinecone.from_documents(chunks, embeddings, index_name=index_name, namespace=namespace)
```
This code snippet gets a 413 (payload too large) response because of this request in huggingface.py:
```
response = requests.post(
self._api_url,
headers=self._headers,
json={
"inputs": texts,
"options": {"wait_for_model": True, "use_cache": True},
},
)
return response.json()
```
We should support a batch size here, like the local SentenceTransformer embeddings do.
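A minimal client-side workaround sketch in the meantime (the batch size is an arbitrary choice; each slice becomes one POST request):
```python
def embed_in_batches(embedder, texts, batch_size=32):
    # Sends at most `batch_size` inputs per request, avoiding the 413 that a
    # single 420-chunk payload triggers.
    vectors = []
    for i in range(0, len(texts), batch_size):
        vectors.extend(embedder.embed_documents(texts[i : i + batch_size]))
    return vectors
```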
### Description
I am trying to use Pinecone with the Hugging Face Inference API as the embedding model. I have 420 chunks in total, and the wrapper tries to embed all of them in one request.
Also, embedding_chunk_size cannot be passed through the Pinecone.from_documents() method.
### System Info
```
langchain==0.1.2
langchain-cli==0.0.20
langchain-community==0.0.14
langchain-core==0.1.14
langchainhub==0.1.14
```
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | HuggingFaceInferenceAPIEmbeddings getting 413 request code because of not batching mechanism like SentenceTransformer | https://api.github.com/repos/langchain-ai/langchain/issues/16443/comments | 1 | 2024-01-23T11:14:55Z | 2024-04-30T16:22:03Z | https://github.com/langchain-ai/langchain/issues/16443 | 2,095,842,833 | 16,443 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code can be used to reproduce the problem:
```
from langchain_community.embeddings import LocalAIEmbeddings
embeddings = LocalAIEmbeddings(
openai_api_base="http://localhost:8080"
)
print(embeddings.embed_query("test"))
```
Error: `AttributeError: module 'openai' has no attribute 'error'`
```
Traceback (most recent call last):
File "/home/slug/udemy/010/main.py", line 22, in <module>
print(embeddings.embed_query("test"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/slug/miniconda3/envs/gpt/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 332, in embed_query
embedding = self._embedding_func(text, engine=self.deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/slug/miniconda3/envs/gpt/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 267, in _embedding_func
return embed_with_retry(
^^^^^^^^^^^^^^^^^
File "/home/slug/miniconda3/envs/gpt/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 98, in embed_with_retry
retry_decorator = _create_retry_decorator(embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/slug/miniconda3/envs/gpt/lib/python3.11/site-packages/langchain_community/embeddings/localai.py", line 45, in _create_retry_decorator
retry_if_exception_type(openai.error.Timeout)
^^^^^^^^^^^^
AttributeError: module 'openai' has no attribute 'error'
```
### Description
I am trying to use LangChain to invoke LocalAI's embedding endpoint to generate embeddings. Per the traceback, the retry decorator in langchain_community/embeddings/localai.py still references openai.error.Timeout, which no longer exists in openai 1.x.
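A minimal illustration of the version mismatch, independent of LangChain:
```python
import openai

print(openai.__version__)        # 1.9.0 here
print(hasattr(openai, "error"))  # False on openai>=1.0: the `error` module was
                                 # removed, so openai.error.Timeout raises AttributeError
```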
### System Info
langchain-community==0.0.13
openai==1.9.0
python==3.11.7
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | LocalAIEmbeddings not compatible with openai > 1.6.x | https://api.github.com/repos/langchain-ai/langchain/issues/16442/comments | 1 | 2024-01-23T10:56:53Z | 2024-04-30T16:22:00Z | https://github.com/langchain-ai/langchain/issues/16442 | 2,095,809,240 | 16,442 |
[
"hwchase17",
"langchain"
] | ### Feature request
Add a try block to the `merge_dicts` function in `langchain_core.utils._merge` to make the function more robust:
original:
```python
def merge_dicts(left: Dict[str, Any], right: Dict[str, Any]) -> Dict[str, Any]:
merged = left.copy()
for k, v in right.items():
if k not in merged:
merged[k] = v
...
else:
raise TypeError(
f"Additional kwargs key {k} already exists in left dict and value has "
f"unsupported type {type(merged[k])}."
)
return merged
```
new:
```python
def merge_dicts(left: Dict[str, Any], right: Dict[str, Any]) -> Dict[str, Any]:
merged = left.copy()
for k, v in right.items():
if k not in merged:
merged[k] = v
...
else:
try:
merged[k] = str(v)
except Exception as e:
raise TypeError(
f"Additional kwargs key {k} already exists in left dict and value has "
f"unsupported type {type(merged[k])}."
)
return merged
```
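(A quick illustration of the behavioral change under the patched version above; the previously-raising int/int case now coerces to a string:)
```python
# before the patch: TypeError for a duplicate int key with a different value
# after the patch:  the right-hand value wins, stringified
merge_dicts({"output_tokens": 10}, {"output_tokens": 12})  # -> {"output_tokens": "12"}
```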
### Motivation
Some functions have malfunctioned since the introduction of `merge_dicts` in the langchain-core 0.1.13 release; we think this function should be more robust for more generic scenarios.
### Your contribution
as above | ADD A TRY BLOCK TO `merge_dicts` | https://api.github.com/repos/langchain-ai/langchain/issues/16441/comments | 4 | 2024-01-23T10:53:48Z | 2024-04-11T02:29:03Z | https://github.com/langchain-ai/langchain/issues/16441 | 2,095,801,586 | 16,441 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
# cat test_issue.py
from langchain.schema import SystemMessage, HumanMessage
from langchain_openai import AzureChatOpenAI
# pip install -U langchain-community
from langchain_community.callbacks import get_openai_callback
llm = AzureChatOpenAI(
openai_api_version="2023-12-01-preview",
azure_deployment="gpt-35-turbo",
model_name="gpt3.5-turbo"
)
messages = [
SystemMessage(
content=(
"You are ExpertGPT, an AGI system capable of "
"anything except answering questions about cheese. "
"It turns out that AGI does not fathom cheese as a "
"concept, the reason for this is a mystery."
)
),
HumanMessage(content="Tell me about parmigiano, the Italian cheese!")
]
with get_openai_callback() as cb:
res = llm(messages)
print(res.content)
# print the total tokens used
print(cb.total_tokens)
```
### Description
I still have problems with this simple LLM completion request, which ran correctly a few months ago.
After updating langchain modules I got the deprecation error:
$ py test_issue.py
**/home/giorgio/.local/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.7 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(**
I'm sorry, but as an AGI system, I cannot answer questions about cheese, including parmigiano. It seems that cheese is a concept that AGI systems do not comprehend. Is there anything else I can help you with?
114
---
The first question is:
why did I have to import the langchain-community module, as suggested by a run-time message?!
I didn't find any documentation.
Final question:
Why the deprecation error?
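(For reference, the warning itself points at the fix; a sketch of the non-deprecated call:)
```python
with get_openai_callback() as cb:
    res = llm.invoke(messages)   # replaces the deprecated llm(messages)
    print(res.content)
    print(cb.total_tokens)
```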
BTW, see also related issue: https://github.com/langchain-ai/langchain/issues/13785
Thanks
giorgio
### System Info
$ python3 --version
Python 3.11.7
$ pip show openai | grep Version
Version: 1.9.0
$ pip show langchain | grep Version
Version: 0.1.2
$ pip show langchain-openai | grep Version
Version: 0.0.3
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async | Azure OpenAI deprecation LangChainDeprecationWarning (the function `__call__`) ? | https://api.github.com/repos/langchain-ai/langchain/issues/16438/comments | 4 | 2024-01-23T10:29:30Z | 2024-05-13T16:09:27Z | https://github.com/langchain-ai/langchain/issues/16438 | 2,095,755,355 | 16,438 |
[
"hwchase17",
"langchain"
] | ### Feature request
CSVAgent currently uses the same CSV file for schema to generate the query and data for executing it to generate results. The proposal is to separate these two files, say a smaller one for generation and larger one for execution.
### Motivation
The size of the CSV impacts query generation. Hence we want to provide a small, representative sample for generating the query. Once the query is generated, we want to execute it directly on the original CSV (which can be much larger than the sample), since no LLM is required at that point. A sketch of the idea follows.
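A hedged sketch of the proposed split (a two-file CSVAgent API does not exist today; this only illustrates the idea with plain pandas):
```python
import pandas as pd

sample_df = pd.read_csv("sample.csv")  # small, representative: drives query generation
full_df = pd.read_csv("full.csv")      # large: used only when executing the query

# step 1 (LLM):    generate pandas code against sample_df's schema and example values
# step 2 (no LLM): execute that generated code against full_df
```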
### Your contribution
I am not sure about this at this time. If anyone from the community can provide guidance, I will try to take a look. | CSVAgent with different CSV files for schema and data | https://api.github.com/repos/langchain-ai/langchain/issues/16434/comments | 3 | 2024-01-23T07:38:46Z | 2024-04-30T16:13:23Z | https://github.com/langchain-ai/langchain/issues/16434 | 2,095,439,843 | 16,434 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I set up a chain and parser like following:
```
class Ticket(BaseModel):
reply: str
reply_explanation: str
ticket_parser = PydanticOutputParser(pydantic_object=Ticket)
partial_ticket_prompt = ticket_prompt.partial(
reply_schema=ticket_parser.get_format_instructions(), example=example_data.json()
)
ticket_chain = LLMChain(llm=llm, prompt=partial_ticket_prompt)
```
and then I use it in the following way:
```
async def generate_ai_suggestions(ticket, similar_docs, macro=""):
    ticket = await ticket_chain.apredict(ticket=ticket, similar_docs=similar_docs, macro=macro, configparser={})
    return await ticket_parser.aparse(ticket)
```
### Description
My problem is that tracing is not working for me with this convention (it works for some basic examples with `invoke`). I tried multiple approaches, including decorating with
@traceable(run_type="chain")
Is there any solution?
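(One hedged thing to verify, an assumption about the setup rather than a confirmed fix: the LangSmith tracing environment variables must be set in the process before the chain runs:)
```python
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "..."         # your LangSmith API key
os.environ["LANGCHAIN_PROJECT"] = "my-project"  # optional project name
```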
### System Info
langchain==0.0.350
langchain-community==0.0.3
langchain-core==0.1.1
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async | Tracing in langsmith not working for LLMChan | https://api.github.com/repos/langchain-ai/langchain/issues/16429/comments | 4 | 2024-01-23T07:20:28Z | 2024-06-19T16:06:48Z | https://github.com/langchain-ai/langchain/issues/16429 | 2,095,414,309 | 16,429 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
def custom_length_function(text):
    return len(tokenizer.encode(text))

def split_data_token(doc, chunk_size=512, overlap=0):
    text_splitter = CharacterTextSplitter(
        # separator="\n",
        separator=" ",
        chunk_size=chunk_size,
        chunk_overlap=overlap,
        # length_function=custom_length_function,
        length_function=len,
        is_separator_regex=False,
    )
    return text_splitter.split_documents(doc)
### Description
It works well when using **RecursiveCharacterTextSplitter.from_huggingface_tokenizer**, because I can choose separators like ["\n\n", "\n", " ", ""].
But **CharacterTextSplitter.from_huggingface_tokenizer** didn't work, because it only splits on "\n".
I then tried **CharacterTextSplitter** to set the separator, like below:
text_splitter = CharacterTextSplitter(
    separator="\n",
    is_separator_regex=False)
# NOTE: from_huggingface_tokenizer is a classmethod that builds and returns a
# NEW splitter; calling it on the instance and discarding the result leaves
# text_splitter unchanged.
text_splitter.from_huggingface_tokenizer(
    tokenizer=tokenizer,
    chunk_size=chunk_size,
    chunk_overlap=overlap,
)
but it didn't work, and custom_length_function also didn't behave properly (this was the same code as shown in the Example Code section above).
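A hedged sketch of what may be the intended usage: `from_huggingface_tokenizer` forwards extra keyword arguments to the splitter constructor, so the separator can (in principle) be supplied there. Treat this as an assumption to verify:
```python
text_splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    tokenizer,
    separator=" ",   # forwarded to CharacterTextSplitter(...)
    chunk_size=512,
    chunk_overlap=0,
)
texts = text_splitter.split_documents(doc)
```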
### System Info
.
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | I can't split document by token (CharacterTextSplitter.from_huggingface_tokenizer) | https://api.github.com/repos/langchain-ai/langchain/issues/16427/comments | 1 | 2024-01-23T06:42:11Z | 2024-04-30T16:13:27Z | https://github.com/langchain-ai/langchain/issues/16427 | 2,095,349,082 | 16,427 |
[
"hwchase17",
"langchain"
] | ### Feature request
Using your simple example that parses astream_events, the issue is that if a tool has return_direct=True, the output is not shown: you can only intercept it in the "on_tool_end" event, but the event gives no indication that the output is returned directly to the user.
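A hedged sketch of the consumer side (`agent_executor` is a placeholder; the return_direct flag in the comment is the requested addition, not something the event carries today):
```python
async for event in agent_executor.astream_events({"input": "hi"}, version="v1"):
    if event["event"] == "on_tool_end":
        output = event["data"].get("output")
        # There is currently no event["data"]["return_direct"] (or similar)
        # telling us this output should be rendered as plain assistant text.
```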
### Motivation
This will help show those outputs as regular text in a client chat app, rather than as a tool call. We have tools like "askUserForMoreInformation" that seem to help the conversation flow; that is an example of a tool that simply returns its output to the user.
### Your contribution
Sure, but I'm not sure will have the time to dig into it, hoping someone more familiar can address. | Add indication of return_direct tools in asteam_events | https://api.github.com/repos/langchain-ai/langchain/issues/16425/comments | 3 | 2024-01-23T05:00:48Z | 2024-04-30T16:26:39Z | https://github.com/langchain-ai/langchain/issues/16425 | 2,095,245,386 | 16,425 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Below is a chain that uses JsonOutputParser. Final result can be streamed with openai. When using anthropic, it is only available once the full response finishes streaming.
Likely there's a systematic difference in how JSON is yielded during streaming between OpenAI and Anthropic, and our existing JSON parser only handles OpenAI's convention.
A fix would require generating some chunks of JSON with both Anthropic and OpenAI, and then extending our JsonOutputParser to partially parse them.
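(A toy sketch of the partial-parse idea, assuming chunks arrive as a growing prefix of one JSON object; the real fix needs a proper incremental parser:)
```python
import json

def parse_partial(buffer: str):
    """Best-effort parse of a JSON prefix by appending plausible closers."""
    for closer in ("", '"', '"}', "}", '"}]', "}]", "]}"):
        try:
            return json.loads(buffer + closer)
        except json.JSONDecodeError:
            continue
    return None
```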
```python
from typing import List
from langchain.chat_models import ChatAnthropic
from langchain.prompts import PromptTemplate
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke.")
    punchline: str = Field(description="answer to resolve the joke")
    rating: int = Field(description="rating from 0-9 about how good the joke is")
# Set up a parser + inject instructions into the prompt template.
parser = JsonOutputParser(pydantic_object=Joke)
prompt = PromptTemplate.from_template(
template="Answer the user quer using a long jokey.\n{format_instructions}\n{query}\n",
).partial(format_instructions=parser.get_format_instructions())
model = ChatAnthropic(temperature=0)
# model = ChatOpenAI(temperature=0)
chain = prompt | model | parser
async for s in chain.astream({"query": "tell me a joke about space"}):
    print(s)
```
---
Potentially hard task for folks without a background in CS (i.e., if you know what a recursive descent parser is, feel free to pick this up :)) | JsonOutputParser Streaming works with ChatOpenAI but not ChatAnthropic | https://api.github.com/repos/langchain-ai/langchain/issues/16423/comments | 2 | 2024-01-23T04:04:13Z | 2024-02-05T21:32:17Z | https://github.com/langchain-ai/langchain/issues/16423 | 2,095,197,510 | 16,423
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
from langchain_community.llms import Tongyi
import os
from langchain_community.document_loaders import WebBaseLoader
from langchain.embeddings import DashScopeEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.documents import Document
from langchain.chains import create_retrieval_chain
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain.tools.retriever import create_retriever_tool
# bring in the large model
os.environ["DASHSCOPE_API_KEY"] = "my_api_key"
llm = Tongyi(model_name="qwen-turbo")
loader = WebBaseLoader("https://docs.smith.langchain.com/overview")
docs = loader.load()
# map the documents into a vector space
embeddings = DashScopeEmbeddings(
model="text-embedding-v1", dashscope_api_key="my_api_key"
)
# split the text
text_splitter = RecursiveCharacterTextSplitter()
documents = text_splitter.split_documents(docs)
vector = FAISS.from_documents(documents, embeddings)
retriever = vector.as_retriever()
retriever_tool = create_retriever_tool(
retriever,
"langsmith_search",
"Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
)
# with api
os.environ["TAVILY_API_KEY"] = "my_api_key"
search = TavilySearchResults()
tools = [retriever_tool, search]
from langchain import hub
from langchain.agents import AgentExecutor, create_xml_agent
from langchain.agents import create_react_agent, create_json_chat_agent
prompt = hub.pull("hwchase17/react")
# initialize the agent
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "what is LangChain?"})
```
inputs:
{'input': 'what is LangChain?'}

> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[2], line 69
     62 agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
     64 # chat_history = [HumanMessage(content="Can LangSmith help test my LLM applications?"), AIMessage(content="Yes!")]
     65 # agent_executor.invoke({
     66 #     "chat_history": chat_history,
     67 #     "input": "Tell me how"
     68 # })
---> 69 agent_executor.invoke({"input": "what is LangChain?"})

File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py:164, in Chain.invoke(self, input, config, **kwargs)
    161 except BaseException as e:
    163     run_manager.on_chain_error(e)
--> 164     raise e
    165 run_manager.on_chain_end(outputs)
    166 final_outputs: Dict[str, Any] = self.prep_outputs(
    167     inputs, outputs, return_only_outputs
    168 )

File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py:157, in Chain.invoke(self, input, config, **kwargs)
    150 run_manager = callback_manager.on_chain_start(
    151     dumpd(self),
    152     inputs,
    153     name=run_name,
...
File /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain_core/utils/_merge.py:42
     42     f"unsupported type {type(merged[k])}."
     43 )
     44 return merged

TypeError: Additional kwargs key output_tokens already exists in left dict and value has unsupported type <class 'int'>.
### Description
* I'm trying to use LangChain to create an agent with `create_react_agent`, `create_json_chat_agent`, or `create_xml_agent`, using the `qwen-max-longcontex` LLM. I copied most of the code from https://python.langchain.com/docs/get_started/quickstart and modified the agent-creation part, because I'm not using an OpenAI model. I followed the document step by step, and everything ran smoothly until this step (creating the agent), where the errors started.
* I expected to see the result shown in the guide.
* Instead, it raises the error above.
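(A minimal reproduction of the failing merge, assuming, as the error text suggests, that two streamed chunks both carry an integer `output_tokens` in their additional kwargs:)
```python
from langchain_core.utils._merge import merge_dicts

merge_dicts({"output_tokens": 10}, {"output_tokens": 12})
# TypeError: Additional kwargs key output_tokens already exists in left dict
# and value has unsupported type <class 'int'>.
```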
### System Info
langchain==0.1.2
langchain-community==0.0.14
langchain-core==0.1.14
langchain-experimental==0.0.49
langchainhub==0.1.14
macOS 12.6.5
python 3.11.3
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | use qwen to create agent TypeError: Additional kwargs key output_tokens already exists in left dict and value has unsupported type <class 'int'>. | https://api.github.com/repos/langchain-ai/langchain/issues/16422/comments | 4 | 2024-01-23T03:56:38Z | 2024-02-29T03:25:22Z | https://github.com/langchain-ai/langchain/issues/16422 | 2,095,191,993 | 16,422 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_community.retrievers import (
GoogleVertexAISearchRetriever,
GoogleCloudEnterpriseSearchRetriever
)
import time
PROJECT_ID = "my_project_id"
SEARCH_ENGINE_ID = "my_datastore_id"
LOCATION_ID = "global"
retriever = GoogleVertexAISearchRetriever(
project_id=PROJECT_ID,
data_store_id=SEARCH_ENGINE_ID,
location_id=LOCATION_ID,
max_documents=3,
engine_data_type=1,
)
while 1:
    message = input("Type: ")
    print("input message: " + message)
    result = retriever.get_relevant_documents(message)
    for doc in result:
        print(doc)
    time.sleep(1)  # add a delay between each request
```
### Description
I tried to use GoogleVertexAISearchRetriever for RAG.
However, the output from `retriever.get_relevant_documents(message)` and the response in the GCP console's Vertex AI app preview are different.
In the Vertex AI console I can see the ideal result with the 5 most relevant hits, but I get no documents back at all from the LangChain script.
### System Info
langchain==0.1.2
langchain-community==0.0.14
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Different output from GoogleVertexAISearchRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/16416/comments | 1 | 2024-01-23T00:11:20Z | 2024-04-30T16:13:23Z | https://github.com/langchain-ai/langchain/issues/16416 | 2,095,003,117 | 16,416 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I adapted the MultiQueryRetriever example to use a local Ollama server with llama2 as the LLM.
When running it I get a ValueError: "Expected each embedding in the embeddings to be a list, got [None]".
This is my Code:
```python
# Build a sample vectorDB
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma
from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain.retrievers.multi_query import MultiQueryRetriever
import logging
model_name = "llama2"
ollama = Ollama(base_url='http://localhost:11434',
model=model_name)
oembed = OllamaEmbeddings(base_url="http://localhost:11434", model=model_name)
# Load blog post
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()
# Split
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
splits = text_splitter.split_documents(data)
# VectorDB
vectordb = Chroma.from_documents(documents=splits, embedding=oembed)
question = "What are the approaches to Task Decomposition?"
llm = ollama
retriever_from_llm = MultiQueryRetriever.from_llm(
retriever=vectordb.as_retriever(), llm=llm
)
# Set logging for the queries
logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
unique_docs = retriever_from_llm.get_relevant_documents(query=question)
print(len(unique_docs))
```
Does anyone have an idea what to do to fix this?
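(One hedged debugging step, reusing `oembed` from the snippet: check what the Ollama embedding endpoint returns for a single query before blaming the retriever:)
```python
vec = oembed.embed_query("What are the approaches to Task Decomposition?")
print(type(vec), None if vec is None else len(vec))  # a None here reproduces the error
```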
### Description
This is the error I get:
Traceback (most recent call last):
File "/home/lukas/code/content-assist/test-multi.py", line 41, in <module>
unique_docs = retriever_from_llm.get_relevant_documents(query=question)
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 223, in get_relevant_documents
raise e
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 216, in get_relevant_documents
result = self._get_relevant_documents(
File "/home/lukas/.local/lib/python3.10/site-packages/langchain/retrievers/multi_query.py", line 175, in _get_relevant_documents
documents = self.retrieve_documents(queries, run_manager)
File "/home/lukas/.local/lib/python3.10/site-packages/langchain/retrievers/multi_query.py", line 210, in retrieve_documents
docs = self.retriever.get_relevant_documents(
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 223, in get_relevant_documents
raise e
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 216, in get_relevant_documents
result = self._get_relevant_documents(
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 654, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 348, in similarity_search
docs_and_scores = self.similarity_search_with_score(
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 438, in similarity_search_with_score
results = self.__query_collection(
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_core/utils/utils.py", line 35, in wrapper
return func(*args, **kwargs)
File "/home/lukas/.local/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py", line 155, in __query_collection
return self._collection.query(
File "/home/lukas/.local/lib/python3.10/site-packages/chromadb/api/models/Collection.py", line 188, in query
validate_embeddings(maybe_cast_one_to_many(query_embeddings))
File "/home/lukas/.local/lib/python3.10/site-packages/chromadb/api/types.py", line 311, in validate_embeddings
raise ValueError(
ValueError: Expected each embedding in the embeddings to be a list, got [None]
### System Info
langchain==0.1.2
langchain-community==0.0.14
langchain-core==0.1.14
langchain-openai==0.0.3
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | MultiQueryRetriever with Ollama: ValueError: Expected each embedding in the embeddings to be a list, got [None] | https://api.github.com/repos/langchain-ai/langchain/issues/16415/comments | 3 | 2024-01-22T23:39:40Z | 2024-05-10T06:52:03Z | https://github.com/langchain-ai/langchain/issues/16415 | 2,094,973,143 | 16,415 |
[
"hwchase17",
"langchain"
] | ### Feature request
I would like to learn how to modify ingest.py to load multiple text files and include their source URLs (a rough sketch of the pattern is below).
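A hedged sketch of the general pattern (not the template's actual ingest.py; the file layout and metadata key are assumptions):
```python
from pathlib import Path
from langchain_community.document_loaders import TextLoader

docs = []
for path in Path("data").glob("*.txt"):
    loaded = TextLoader(str(path)).load()
    for d in loaded:
        d.metadata["source"] = str(path)  # or the document's real URL
    docs.extend(loaded)
```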
### Motivation
Currently only one file is indexed.
### Your contribution
I know Neo4j and LangChain well, but not LangServe. I can help once I get the concept.
@efriis and @tomasonjo original author | neo4j-advanced-rag multiple documents | https://api.github.com/repos/langchain-ai/langchain/issues/16412/comments | 8 | 2024-01-22T22:51:33Z | 2024-04-30T16:29:30Z | https://github.com/langchain-ai/langchain/issues/16412 | 2,094,912,530 | 16,412 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from typing import Any, List, Mapping, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM

class CustomLLM(LLM):
    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[: self.n]

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"n": self.n}
```
### Description
Following the code here: https://python.langchain.com/docs/modules/model_io/llms/custom_llm
I get the following error:
AttributeError: module 'langchain' has no attribute 'debug'
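(A hedged note: langchain 0.0.147 likely predates the module-level debug flag that the current docs use; on recent versions the documented toggle is the following, so upgrading would be required either way:)
```python
from langchain.globals import set_debug

set_debug(True)  # newer equivalent of `langchain.debug = True`
```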
### System Info
langchain-0.0.147
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | AttributeError: module 'langchain' has no attribute 'debug' | https://api.github.com/repos/langchain-ai/langchain/issues/16406/comments | 2 | 2024-01-22T21:12:09Z | 2024-04-29T16:15:51Z | https://github.com/langchain-ai/langchain/issues/16406 | 2,094,760,471 | 16,406 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Impossible to access `system_fingerprint` from OpenAI responses.
see: https://github.com/langchain-ai/langchain/discussions/13170#discussioncomment-8211745 | Expose complete response metadata from chat model via .invoke/.batch/.stream | https://api.github.com/repos/langchain-ai/langchain/issues/16403/comments | 4 | 2024-01-22T19:45:53Z | 2024-06-23T16:09:30Z | https://github.com/langchain-ai/langchain/issues/16403 | 2,094,630,738 | 16,403 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Same as #16295 for `@beta` decorator | preserve inspect.iscoroutinefunction with @beta decorator | https://api.github.com/repos/langchain-ai/langchain/issues/16402/comments | 2 | 2024-01-22T19:35:07Z | 2024-01-31T19:15:39Z | https://github.com/langchain-ai/langchain/issues/16402 | 2,094,615,439 | 16,402 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
OpenAI function-calling doesn't support function names with spaces. Need to update all Tool names to be snake_cased so that they work as OpenAI functions by default. See #16395 for example fix. | Make all Tool names snake_case | https://api.github.com/repos/langchain-ai/langchain/issues/16396/comments | 1 | 2024-01-22T18:12:45Z | 2024-01-26T22:10:10Z | https://github.com/langchain-ai/langchain/issues/16396 | 2,094,478,715 | 16,396 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
# Define the path to the PDF file to load.
directory = "/content/test.pdf"

# Function to load documents from the specified file path.
def load_docs(directory):
    # Create a PyPDFLoader for the provided file path.
    loader = PyPDFLoader(directory)
    # Load the PDF and split it into per-page documents.
    documents = loader.load_and_split()
    # Return the loaded documents.
    return documents

# Load the documents (load_and_split already splits them into per-page chunks).
docs = load_docs(directory)
strings = []
for doc in docs:
    strings.append(doc.page_content)
bedrock_embeddings = BedrockEmbeddings(model_id=modelId,
client=bedrock_runtime)
embeddings = bedrock_embeddings.embed_documents(strings)
```
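(A hedged first debugging step with the same client: embed one short string to see whether the Bedrock call itself returns a vector:)
```python
single = bedrock_embeddings.embed_query("hello world")
print(type(single), None if single is None else len(single))
```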
### Description
When trying to run this code, the embeddings return as None. I have added the correct info to my AWS account, and no error pops up
### System Info
Google Colab
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [x] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Creating embeds from bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/16394/comments | 1 | 2024-01-22T18:04:37Z | 2024-04-30T16:13:20Z | https://github.com/langchain-ai/langchain/issues/16394 | 2,094,466,192 | 16,394 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms import GPT4All
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(model=model_path, callbacks=callbacks)
```
### Description
Previously it was possible to stream the answer of a GPT4All model, but now it does not work anymore.
In the model source there is a `streaming` attribute declared at the class level, but it's not used anywhere.
If I edit the source manually to accept `streaming` as a valid parameter, I can make it work again with `GPT4All(model=model_path, callbacks=callbacks, streaming=True)`.
### System Info
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.13
Debian Sid
Python 3.10.4
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Streaming broken for GPT4all | https://api.github.com/repos/langchain-ai/langchain/issues/16389/comments | 2 | 2024-01-22T17:07:32Z | 2024-01-22T17:54:20Z | https://github.com/langchain-ai/langchain/issues/16389 | 2,094,367,590 | 16,389 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
@chain
def answer_call(inputs: Any) -> Any:
    if read_persist_var("chat_mode_choose") == chatmode[0]:
        base_answer = RunnableWithMessageHistory(
            prompt_b | llm,
            RedisChatMessageHistory,
            input_messages_key="input",
            history_messages_key="history",
        )
        return base_answer
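A hedged sketch, reusing `prompt_b` and `llm` from above: appending a `StrOutputParser` makes the runnable emit a plain string instead of a message object, which is usually what a Gradio chatbot expects:
```python
from langchain_core.output_parsers import StrOutputParser

base_answer = RunnableWithMessageHistory(
    prompt_b | llm | StrOutputParser(),  # invoke now returns a plain str
    RedisChatMessageHistory,
    input_messages_key="input",
    history_messages_key="history",
)
```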
### Description
As I said before, I want to extract only the AI's answer, because I use it in a Gradio chatbot and the format doesn't match (it always comes wrapped, e.g. context='xxx'). I tried a lot (the re module, the .content attribute) and none of it worked. This is really important to me; if you can help, thanks a lot.
### System Info
Python 3.9.18
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | I can't get only ai answer via RunnableWithMessageHistory ( it always come with context= ) | https://api.github.com/repos/langchain-ai/langchain/issues/16386/comments | 2 | 2024-01-22T16:44:24Z | 2024-01-24T13:28:16Z | https://github.com/langchain-ai/langchain/issues/16386 | 2,094,318,035 | 16,386 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Callback propagation is failing when creating a tool from .invoke signatures:
https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools.py#L516
```
Tool(
name="some_tool",
func=some_runnable.invoke,
coroutine=some_runnable.ainvoke,
description="cats in the park",
return_direct=True
)
```
1) Callbacks will not be propagated properly, because `.invoke` and `.ainvoke` do not have a `callbacks` parameter; instead they take a `config`.
2) We should instead create a nice way to create a tool from an existing runnable. | Create on-ramp for tools from runnables | https://api.github.com/repos/langchain-ai/langchain/issues/16381/comments | 1 | 2024-01-22T16:16:52Z | 2024-04-30T16:13:20Z | https://github.com/langchain-ai/langchain/issues/16381 | 2,094,267,590 | 16,381 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/modules/callbacks/#when-do-you-want-to-use-each-of-these
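(For instance, a hedged sketch of the pattern the docs should show: callbacks travel inside the `config` argument, where `my_handler` is a placeholder:)
```python
chain.invoke({"input": "hi"}, config={"callbacks": [my_handler]})
```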
### Idea or request for content:
Document how to pass callbacks via .invoke | DOC: Document how to pass callbacks with Runnable methods (e.g., .invoke / .batch) | https://api.github.com/repos/langchain-ai/langchain/issues/16379/comments | 1 | 2024-01-22T16:07:04Z | 2024-04-30T16:15:00Z | https://github.com/langchain-ai/langchain/issues/16379 | 2,094,249,309 | 16,379 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
tools = [get_news, solve_math_problem]
agent = ZeroShotAgent()
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
verbose=False,
max_iterations=1,
early_stopping_method="generate",
return_intermediate_steps=True,
handle_parsing_errors=True,
)
result = agent_executor.invoke(
input={
"input": user_query,
},
)
```
### Description
We find that if the agent reaches the maximum number of steps but still wants to use tools to answer the question, then the "final answer" becomes the input for the next step rather than a real final answer. This problem cannot be solved just by setting the right parameters.
**e.g.
Input:**
```News Tesla? and root of 18376?```
**Output:**
```
('Action: solve_math_problem\nAction Input: square root of 18376',
[{'input': 'Tesla',
'output': "Tesla CEO Elon Musk has expressed ...",
'tool': 'get_news'}])
```
We set `max_iterations=1` just to make this error easy to reproduce.
### System Info
We check the function for generating the final answer and printed out the input and output.

**new_inputs:**
```python
{
'agent_scratchpad':
'Thought: The question has two parts: one is about the latest news on Tesla, and the other is about the root of 18376. I can use the get_news tool to find the latest news about Tesla, and the solve_math_problem tool to find the square root of 18376.\n
Action: get_news \nAction Input: Tesla\n
Observation: {
\'input\': \'Tesla\',
\'output\': "Tesla\'s CEO Elon Musk has sparked speculation about his ownership stake in the company after expressing
his reluctance to develop Tesla into a leader in artificial intelligence (AI) and robotics …",
}\n
Thought:\n\nI now need to return a final answer based on the previous steps:',
'stop': ['\nObservation:', '\n\tObservation:']
}
```
**full_output:**
```python
: Action: solve_math_problem Action Input: root of 18376
```
My guess is that the agent_scratchpad prompt clearly states "The question has two parts ..." but does not include the actual user query ("News Tesla? and root of 18376?"). Therefore the LLM may be confused about what "final answer" means: is it the final answer to the initial user query, or to the previous steps?
Please review this issue. Thank you!
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | early_stopping_method parameter of AgentExecutor doesn’t work in expected way | https://api.github.com/repos/langchain-ai/langchain/issues/16374/comments | 4 | 2024-01-22T15:04:11Z | 2024-02-05T15:16:17Z | https://github.com/langchain-ai/langchain/issues/16374 | 2,094,121,411 | 16,374 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code
```python
from langchain.agents import create_openai_tools_agent
from langchain_community.utilities.bing_search import BingSearchAPIWrapper
from langchain_community.tools.bing_search.tool import BingSearchRun, BingSearchResults
def create_new_bing_search_agent_function_openai(llm):
    bing_search = BingSearchAPIWrapper(bing_search_url="xxx", bing_subscription_key="xxx", k=4)
    bing_tool = BingSearchResults(num_results=1, api_wrapper=bing_search)
    tools_bing = [bing_tool]
    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "Have a conversation with a human. You are a helpful assistant who retrieves information from the Bing Search (the internet)."),
            ("user", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
    )
    agent = create_openai_functions_agent(llm, tools_bing, prompt)
    agent_executor = AgentExecutor(tools=tools_bing, agent=agent)
    return agent_executor
agent_chain_bing = create_new_bing_search_agent_function_openai(llm)
output = agent_chain_bing.invoke({"input": "What is stock price of Apple?"})
output['output']
```
Gives this error:
BadRequestError: Error code: 400 - {'error': {'message': "'Bing Search Results JSON' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.0.name'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
The same error appears if I define the agent like this:
agent = create_openai_tools_agent(llm, tools_bing, prompt)
Am I using this agent wrong? Is there some other way I can use Bing search in an agent?
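(A hedged workaround sketch: the 400 comes from the tool's default name "Bing Search Results JSON", which violates OpenAI's `^[a-zA-Z0-9_-]{1,64}$` function-name pattern; overriding the tool name with a snake_case value, an assumption to verify, should satisfy it:)
```python
bing_tool = BingSearchResults(
    num_results=1,
    api_wrapper=bing_search,
    name="bing_search_results_json",  # must match ^[a-zA-Z0-9_-]{1,64}$
)
```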
### Description
langchain==0.1.1
### System Info
model GPT4 32k
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | 'Bing Search Results JSON' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.0.name' with new langchain 0.1 when using BingSearchResults into agent | https://api.github.com/repos/langchain-ai/langchain/issues/16368/comments | 19 | 2024-01-22T12:38:20Z | 2024-01-25T08:19:17Z | https://github.com/langchain-ai/langchain/issues/16368 | 2,093,826,511 | 16,368 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Below is my code:
```python
elif any(file_path.lower().endswith(f".{img_type}") for img_type in image_types):
    loader = UnstructuredImageLoader(file_path)
    document = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    texts = text_splitter.split_documents(documents=document)
```
### Description
Below is the error I am getting while using UnstructuredImageLoader:
File "/home/hs/env/lib/python3.8/site-packages/unstructured/partition/pdf.py", line 263, in _partition_pdf_or_image_local
layout = process_file_with_model(
File "/home/hs/env/lib/python3.8/site-packages/unstructured_inference/inference/layout.py", line 377, in process_file_with_model
model = get_model(model_name, **kwargs)
TypeError: get_model() got an unexpected keyword argument 'ocr_languages'
### System Info
dell 7480 latitude
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | getting error while using UnstructuredImageLoader | https://api.github.com/repos/langchain-ai/langchain/issues/16366/comments | 2 | 2024-01-22T11:52:25Z | 2024-01-22T17:51:28Z | https://github.com/langchain-ai/langchain/issues/16366 | 2,093,742,868 | 16,366 |
[
"hwchase17",
"langchain"
] | ### Feature request
I have different chains, such as a vector chain, a graph chain, and a custom chain.
I have created a rule-based system: when the user writes a query, it should be passed to an agent that knows which LLMChain to select out of all the chains (see the routing sketch below).
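A hedged routing sketch with LCEL's RunnableBranch (the chain names are the placeholders from this request, and the `route` key is assumed to come from an upstream classifier):
```python
from langchain_core.runnables import RunnableBranch

router = RunnableBranch(
    (lambda x: x["route"] == "graph", graph_chain),
    (lambda x: x["route"] == "vector", vector_chain),
    custom_chain,  # default branch
)
```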
### Motivation
I'm building a RAG app for production. I can't use OpenAI, so I'm looking at open-source LLMs such as Mistral and Llama.
When the user writes a query, the system should smartly select among the LLMChains.
### Your contribution
love to work on this chain | Calling different LLMchains into agent for opensource models such as mistral lama etc | https://api.github.com/repos/langchain-ai/langchain/issues/16364/comments | 1 | 2024-01-22T11:39:47Z | 2024-04-30T16:21:54Z | https://github.com/langchain-ai/langchain/issues/16364 | 2,093,720,478 | 16,364 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Below is my code:
```
elif file_path.lower().endswith(".docx") or file_path.lower().endswith(".doc"):
    docx_loader = UnstructuredWordDocumentLoader(file_path, mode="elements")
    docx_document = docx_loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=0)
    texts = text_splitter.split_documents(documents=docx_document)
    print(docx_document, "***************************")
```
Below is the error I am getting:
File "/home/hs/env/lib/python3.8/site-packages/langchain_community/vectorstores/chroma.py", line 742, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/home/hs/env/lib/python3.8/site-packages/langchain_community/vectorstores/chroma.py", line 309, in add_texts
raise ValueError(e.args[0] + "\n\n" + msg)
ValueError: Expected metadata value to be a str, int, float or bool, got ['Deepak Kumar'] which is a <class 'list'>
Try filtering complex metadata from the document using langchain_community.vectorstores.utils.filter_complex_metadata.
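(The error message itself points at the fix; a sketch applying it to the `texts` from the snippet above:)
```python
from langchain_community.vectorstores.utils import filter_complex_metadata

texts = filter_complex_metadata(texts)  # drops list/dict metadata values Chroma can't store
```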
### Idea or request for content:
_No response_ | getting error while integrating UnstructuredWordDocumentLoader | https://api.github.com/repos/langchain-ai/langchain/issues/16363/comments | 2 | 2024-01-22T11:25:29Z | 2024-01-22T17:52:47Z | https://github.com/langchain-ai/langchain/issues/16363 | 2,093,695,219 | 16,363 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
For the docs [Web scraping](https://python.langchain.com/docs/use_cases/web_scraping).
The example links below, as mapped in that page, show a "Page Not Found" error:
1. [AsyncHtmlLoader](https://python.langchain.com/docs/use_cases/docs/integrations/document_loaders/async_html)
2. [AsyncChromiumLoader](https://python.langchain.com/docs/use_cases/docs/integrations/document_loaders/async_chromium)
3. [HTML2Text](https://python.langchain.com/docs/use_cases/docs/integrations/document_transformers/html2text)
4. [WebResearchRetriever](https://python.langchain.com/docs/modules/data_connection/retrievers/web_research) -> Already Mentioned in the ISSUE #16241
Ideally the URLs should be:
1. [AsyncHtmlLoader](https://python.langchain.com/docs/integrations/document_loaders/async_html)
2. [AsyncChromiumLoader](https://python.langchain.com/docs/integrations/document_loaders/async_chromium)
3. [HTML2Text](https://python.langchain.com/docs/integrations/document_transformers/html2text)
4. Notebook Not found in the docs
### Idea or request for content:
_No response_ | Mismatch in Mapping Notebook URLs in Web scraping Docs | https://api.github.com/repos/langchain-ai/langchain/issues/16361/comments | 1 | 2024-01-22T11:15:31Z | 2024-01-22T17:43:17Z | https://github.com/langchain-ai/langchain/issues/16361 | 2,093,677,375 | 16,361 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
prompt_template = """Use the following pieces of context to answer the question at the end. Try to answer in a structured way. Write your answer in HTML format but do not include ```html ```. Put words in bold that directly answer your question.
If you don't know the answer, just say 'I am sorry I dont know the answer to this question or you dont have access to the files needed to answer the question.' Don't try to make up an answer.
{summaries}
Question: {question}.
"""
PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["summaries", "question"]
)

memory = ConversationBufferWindowMemory(
    k=5, memory_key="chat_history", return_messages=True, output_key="answer"
)
for i in range(0, int(len(chat_history) / 2)):
    memory.save_context(
        {"input": chat_history[i * 2]}, {"answer": chat_history[(i * 2) + 1]}
    )

chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    memory=memory,
    chain_type_kwargs={
        "prompt": PromptTemplate(
            template=PROMPT,
            input_variables=["summaries", "question"],
        ),
    },
)
```
### Description
I want to use RetrievalQAWithSourcesChain to generate an answer plus the relevant sources from the retriever. However, with this code I get the following error:
. System.Private.CoreLib: Result: Failure
Exception: KeyError: 'template'
prompt.py", line 146, in template_is_valid
values["template"], values["template_format"]
### System Info
langchain==0.1.0
langsmith==0.0.80
langchainhub==0.1.14
langchain-community==0.0.12
openai==1.7.2
azure-identity == 1.13.0
azure-core ==1.28.0
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | How to use RetrievalQAWithSourcesChain with a custom prompt | https://api.github.com/repos/langchain-ai/langchain/issues/16356/comments | 2 | 2024-01-22T09:33:44Z | 2024-05-15T16:07:01Z | https://github.com/langchain-ai/langchain/issues/16356 | 2,093,482,579 | 16,356 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code:
```python
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain.agents.agent import AgentOutputParser
from langchain.agents.chat.output_parser import ChatOutputParser


def creat_ai_search_new_agent(embeddings, llm, class_name_rich):
    ai_search_endpoint = get_ai_search_endpoint()
    ai_search_admin_key = get_ai_search_admin_key()

    vector_store = AzureSearch(
        azure_search_endpoint=xxx,
        azure_search_key=xxx,
        index_name=xxx,
        embedding_function=embeddings.embed_query,
        content_key=xxx,
    )

    # Retriever that uses `Azure Cognitive Search`.
    azure_search_retriever = AzureSearchVectorStoreRetriever(
        vectorstore=vector_store,
        search_type="hybrid",
        k=4,
        top=10,
    )

    retriever_tool = create_retriever_tool(
        azure_search_retriever,
        "Retriever",
        "Useful when you need to retrieve information from documents",
    )

    class Response(BaseModel):
        """Final response to the question being asked"""

        answer: str = Field(description="The final answer to respond to the user")

    prompt = ChatPromptTemplate.from_messages(
        [
            ("system", "You are a helpful assistant who retrieves information from documents"),
            ("user", "{input}"),
            MessagesPlaceholder(variable_name="agent_scratchpad"),
        ]
    )

    llm_with_tools = llm.bind(
        functions=[
            # The retriever tool
            format_tool_to_openai_function(retriever_tool),
            # Response schema
            convert_pydantic_to_openai_function(Response),
        ]
    )

    try:
        agent = (
            {
                "input": lambda x: x["input"],
                # Format agent scratchpad from intermediate steps
                "agent_scratchpad": lambda x: format_to_openai_function_messages(
                    x["intermediate_steps"]
                ),
            }
            | prompt
            | llm_with_tools
            | OpenAIFunctionsAgentOutputParser()
        )
        agent_executor = AgentExecutor(
            tools=[retriever_tool],
            agent=agent,
            verbose=True,
            return_intermediate_steps=True,
            handle_parsing_errors=True,
            max_iterations=15,
        )
    except Exception as e:
        print(e)
        print("error instanciating the agent")

    return agent_executor
```
This gives the error `Response is not a valid tool, try one of [Retriever].`, and the agent then loops until it reaches the iteration limit. The final steps of the agent look like this:
Invoking: `Response` with `{'answer': "XXXXXXX", 'sources': [58, 15, 57, 29]}`
Response is not a valid tool, try one of [Retriever].
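A hedged sketch of the usual remedy (adapted from LangChain's structured-output agent recipe; it may need adjusting to your setup): route calls to the `Response` "function" to an `AgentFinish` yourself, since the `AgentExecutor` only knows about the `Retriever` tool:

```python
import json

from langchain_core.agents import AgentActionMessageLog, AgentFinish


def parse(output):
    # No function call -> the model answered directly.
    if "function_call" not in output.additional_kwargs:
        return AgentFinish(return_values={"output": output.content}, log=output.content)

    function_call = output.additional_kwargs["function_call"]
    name = function_call["name"]
    inputs = json.loads(function_call["arguments"] or "{}")

    if name == "Response":
        # Treat the Response schema as the final answer, not a tool invocation.
        return AgentFinish(return_values=inputs, log=str(function_call))

    return AgentActionMessageLog(
        tool=name, tool_input=inputs, log="", message_log=[output]
    )
```

Using `| parse` in place of `| OpenAIFunctionsAgentOutputParser()` should stop the executor from trying to execute `Response` as a tool.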
### Description
langchain == 0.1.1
openai==1.7.0
### System Info
Using model GPT-4 32K
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | 'Response is not a valid tool, try one of [Retriever].'), when using OpenAIFunctionsAgentOutputParser() | https://api.github.com/repos/langchain-ai/langchain/issues/16355/comments | 4 | 2024-01-22T09:29:42Z | 2024-05-02T16:05:54Z | https://github.com/langchain-ai/langchain/issues/16355 | 2,093,473,398 | 16,355 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
ask_chain = generate_ask_chain()
ner_chain = generate_ner_chain()
reasonable_chain = generate_resonable_chain()
overall_chain = generate_sequen_chain(ner_chain, reasonable_chain, ask_chain)  # uses SequentialChain

for chunk in overall_chain.stream(
    {"profile": profile, "dialogue": dialogue, "pair": pair, "question": question, "answer": answer},
    return_only_outputs=True,
):
    print(chunk.content, end="", flush=True)
```
### Description
I want to stream the processed output of the SequentialChain, but it seems a dict cannot be streamed this way:
AttributeError: 'dict' object has no attribute 'content'
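A hedged note on the cause, with a minimal workaround sketch: classic `Chain.stream()` yields output dicts keyed by output variable, not message chunks with a `.content` attribute, so print the dict's values instead (token-level streaming would require rebuilding the pipeline with LCEL runnables):

```python
for chunk in overall_chain.stream(
    {"profile": profile, "dialogue": dialogue, "pair": pair,
     "question": question, "answer": answer},
    return_only_outputs=True,
):
    # `chunk` is a dict such as {"text": "..."}; adapt the key to your chain's output.
    for value in chunk.values():
        print(value, end="", flush=True)
```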
### System Info
python3.11
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | AttributeError: 'dict' object has no attribute 'content' | https://api.github.com/repos/langchain-ai/langchain/issues/16354/comments | 4 | 2024-01-22T09:14:19Z | 2024-04-30T16:30:28Z | https://github.com/langchain-ai/langchain/issues/16354 | 2,093,434,497 | 16,354 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
import os
from langchain_community.chat_models import QianfanChatEndpoint
from langchain_core.messages import HumanMessage, AIMessage
os.environ["QIANFAN_AK"] = "my-ak"
os.environ["QIANFAN_SK"] = "my-sk"
chat = QianfanChatEndpoint(streaming=True)
messages = [HumanMessage(content="你叫小荔,是一个旅游向导,只会根据真实的信息提供攻略。你的攻略或建议必须真实且有效,并且详细描述涉及的地点"), AIMessage(content="明白"), HumanMessage(content="成都三日游")]
print(messages)
try:
    for chunk in chat.stream(messages):
        print(chunk)
except TypeError as e:
    print("")
```
### Description
When using QianfanChatEndpoint with streaming enabled, it returns only two chunks, which is not the full response. I followed the documentation and wrote exactly the same code, only changing the prompt.

Disabling streaming and using `chat.invoke` returns the full response.

This seems like a bug in the streaming implementation.
### System Info
python version 3.10.11
langchain version 0.1.1
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Using QianfanChatEndpoint with stream enabled only returns two chunk of messages. Disable stream and using invoke does not have this problem | https://api.github.com/repos/langchain-ai/langchain/issues/16352/comments | 4 | 2024-01-22T08:09:03Z | 2024-05-10T16:09:00Z | https://github.com/langchain-ai/langchain/issues/16352 | 2,093,323,353 | 16,352 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
This code works with langchain 0.1.0 and Azure Search Documents 11.4b9.
However, with Azure Search Documents 11.4.0 I get the error: `ImportError: cannot import name 'Vector' from 'azure.search.documents.models'`
```
with callbacks.collect_runs() as cb:
    embeddings = AzureOpenAIEmbeddings(
        azure_deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME,
        openai_api_version="2023-05-15",
    )
    # Init vector store
    vector_store: AzureSearch = AzureSearch(
        azure_search_endpoint=SEARCH_SERVICE_ENPOINT,
        azure_search_key=SEARCH_SERVICE_ADMIN_KEY,
        index_name=index_name,
        embedding_function=embeddings.embed_query,
    )
    llm = AzureChatOpenAI(
        azure_deployment=OPENAI_DEPLOYMENT_ENDPOINT1, openai_api_version="2023-05-15"
    )
    # Should take `chat_history` and `question` as input variables.
    condense_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question. If you do not know the answer reply with 'I am sorry'.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
    CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(condense_template)
    question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)

    prompt_template = """Use the following pieces of context to answer the question at the end. Try to answer in a structured way. Write your answer in HTML format but do not include ```html ```. Put words in bold that directly answer your question.
If you don't know the answer, just say 'I am sorry I dont know the answer to this question or you dont have access to the files needed to answer the question.' Don't try to make up an answer.
{context}
Question: {question}.
"""
    PROMPT = PromptTemplate(
        template=prompt_template, input_variables=["context", "question"]
    )
    doc_chain = load_qa_chain(llm, chain_type="stuff", prompt=PROMPT)
    memory = ConversationBufferWindowMemory(
        k=5, memory_key="chat_history", return_messages=True, output_key="answer"
    )
    for i in range(0, int(len(chat_history) / 2)):
        memory.save_context(
            {"input": chat_history[i * 2]}, {"answer": chat_history[(i * 2) + 1]}
        )
    chain = ConversationalRetrievalChain(
        retriever=vector_store.as_retriever(),
        combine_docs_chain=doc_chain,
        question_generator=question_generator,
        memory=memory,
        return_source_documents=True,
    )
    result = chain({"question": user_question})
    run_id = str(cb.traced_runs[0].id)
    return result, run_id
```
### Description
I am trying to use langchain with Azure Search
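A hedged workaround based on the observation above: since the code works with the 11.4 beta but breaks on the 11.4.0 GA release, pinning `azure-search-documents` back to the beta (e.g. `pip install "azure-search-documents==11.4.0b9"`) should restore the `Vector` import until the pinned langchain-community version supports the GA SDK.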
### System Info
I am using the following libraries:
langchain==0.1.0
langsmith==0.0.83
langchainhub==0.1.14
langchain-community==0.0.12
langchain-openai==0.0.3
azure-search-documents==11.4.0
openai==1.7.2
azure-identity == 1.13.0
azure-core ==1.28.0
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | ImportError: cannot import name 'Vector' from 'azure.search.documents.models' | https://api.github.com/repos/langchain-ai/langchain/issues/16351/comments | 2 | 2024-01-22T06:55:27Z | 2024-01-22T09:27:46Z | https://github.com/langchain-ai/langchain/issues/16351 | 2,093,219,400 | 16,351 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
test_template = """Rephrase the query and output in json format. Here is an example:
###
query: hello world.
output: {"rephrased_query": "hello my world."}
###
query: {question}
output:"""
test_query_prompt = PromptTemplate(
    input_variables=["question"],
    template=test_template,
)
test_query_prompt.input_variables
```
['"rephrased_query"', 'question']
### Description
I am trying to initialize a prompt that requests output in JSON format, as shown in the example code. When a JSON example appears in the template, the `input_variables` seem to be generated automatically from the template instead of using the ones I pass in.
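A minimal fix sketch: the f-string-style template treats every single-brace `{...}` as an input variable, so escape the literal JSON braces with double braces:

```python
test_template = """Rephrase the query and output in json format. Here is an example:
###
query: hello world.
output: {{"rephrased_query": "hello my world."}}
###
query: {question}
output:"""

test_query_prompt = PromptTemplate(
    input_variables=["question"],
    template=test_template,
)
test_query_prompt.input_variables  # -> ['question']
```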
### System Info
langchain-0.1.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | can't initialize the PromptTemplate with input_variables correctly | https://api.github.com/repos/langchain-ai/langchain/issues/16349/comments | 1 | 2024-01-22T05:51:46Z | 2024-01-22T16:20:16Z | https://github.com/langchain-ai/langchain/issues/16349 | 2,093,140,496 | 16,349 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
```python
import os

import qdrant_client
from dotenv import load_dotenv
from langchain.chains import RetrievalQA
from langchain.vectorstores import Qdrant
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
from langchain_core.documents import Document
from langchain_core.prompts import PromptTemplate, ChatPromptTemplate

os.environ['OPENAI_API_KEY'] = "key"

template = """你是一位律師,態度非常高傲.
Question: {question}
Context: {context}
Answer:
"""
prompt = ChatPromptTemplate.from_template(template)
print(prompt)


def get_vector_store():
    client = qdrant_client.QdrantClient(
        os.getenv('QDRANT_HOST'),
    )
    embeddings = HuggingFaceBgeEmbeddings(
        model_name="BAAI/bge-large-zh-v1.5",
    )
    # embeddings = SentenceTransformer(model_name="maidalun1020/bce-embedding-base_v1")
    vector_store = Qdrant(
        client=client,
        collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
        embeddings=embeddings,
    )
    return vector_store


def main():
    load_dotenv()
    vectorstore = get_vector_store()
    qa = RetrievalQA.from_chain_type(
        llm=ChatOpenAI(
            temperature=0.7,
            max_tokens=100,
            model=os.getenv('QDRANT_MODEL_NAME'),
        ),
        chain_type="stuff",
        retriever=vectorstore.as_retriever(
            search_type="similarity_score_threshold",
            search_kwargs={"score_threshold": 0.7, "k": 100},
            collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
            model_kwargs={"prompt": prompt},
            # memory=memory,
        ),
    )

    while True:
        # qa.load_memory_variables({"chat_history"})
        documents = []
        question = input("冒險者:")
        document = Document(page_content=question, metadata={'source': 'user'})
        documents.append(document)
        answer = qa.invoke(question)
        print(answer)
        vectorstore.add_documents([document])
        if question == "bye":
            break


if __name__ == "__main__":
    main()
```
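A hedged sketch of a likely fix for the prompt problem (names follow the code above): `as_retriever()` does not accept a prompt via `model_kwargs`; the custom prompt has to be passed to the chain itself through `chain_type_kwargs`:

```python
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(temperature=0.7, max_tokens=100, model=os.getenv('QDRANT_MODEL_NAME')),
    chain_type="stuff",
    retriever=vectorstore.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": 0.7, "k": 100},
    ),
    chain_type_kwargs={"prompt": prompt},  # the ChatPromptTemplate defined above
)

# For conversation memory, ConversationalRetrievalChain is the usual replacement:
# from langchain.memory import ConversationBufferMemory
# memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
# qa = ConversationalRetrievalChain.from_llm(llm=..., retriever=..., memory=memory)
```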
### Idea or request for content:
Why can't I retrieve the prompt I set for the OpenAI prompt engine? Additionally, how can I incorporate memory so that OpenAI can remember what I say during conversations? | Why can't I retrieve the prompt I set for the OpenAI prompt engine? Additionally, how can I incorporate memory so that OpenAI can remember what I say during conversations? | https://api.github.com/repos/langchain-ai/langchain/issues/16345/comments | 1 | 2024-01-22T01:43:36Z | 2024-04-29T16:12:11Z | https://github.com/langchain-ai/langchain/issues/16345 | 2,092,912,993 | 16,345 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
A simplified example pulled almost straight from [here](https://python.langchain.com/docs/integrations/document_transformers/html2text), but fails on the walmart.com page for some reason.
```
from langchain_community.document_loaders import AsyncHtmlLoader
from langchain_community.document_transformers import Html2TextTransformer
urls = ['https://www.walmart.com/shop/deals']
loader = AsyncHtmlLoader(urls)
docs = loader.load()
html2text = Html2TextTransformer()
docs_transformed = html2text.transform_documents(docs)
print(docs_transformed[0].page_content)
```
### Description
* AsyncHtmlLoader fails to load https://www.walmart.com/shop/deals, but works for other urls I tested
* I search for the error, but couldn't find documentation on how I'd avoid the issue with AsyncHtmlLoader
* I would expect AsyncHtmlLoader to never fail to load a webpage due to a technical error. I could see if the request was blocked in some way or another
Error:
```
(crewai) Nicks-Macbook-Pro-4:crewai nroth$ /opt/miniconda3/envs/crewai/bin/python /Users/nroth/workspace/crewai/html2text_example.py
Fetching pages: 0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/langchain_community/document_loaders/async_html.py", line 206, in load
asyncio.get_running_loop()
RuntimeError: no running event loop
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/aiohttp/client_reqrep.py", line 965, in start
message, payload = await protocol.read() # type: ignore[union-attr]
^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/aiohttp/streams.py", line 622, in read
await self._waiter
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/aiohttp/client_proto.py", line 224, in data_received
messages, upgraded, tail = self._parser.feed_data(data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "aiohttp/_http_parser.pyx", line 557, in aiohttp._http_parser.HttpParser.feed_data
File "aiohttp/_http_parser.pyx", line 732, in aiohttp._http_parser.cb_on_header_value
aiohttp.http_exceptions.LineTooLong: 400, message:
Got more than 8190 bytes (9515) when reading Header value is too long.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/nroth/workspace/crewai/html2text_example.py", line 9, in <module>
docs = loader.load()
^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/langchain_community/document_loaders/async_html.py", line 213, in load
results = asyncio.run(self.fetch_all(self.web_paths))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/asyncio/base_events.py", line 684, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/langchain_community/document_loaders/async_html.py", line 189, in fetch_all
return await tqdm_asyncio.gather(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/tqdm/asyncio.py", line 79, in gather
res = [await f for f in cls.as_completed(ifs, loop=loop, timeout=timeout,
^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/asyncio/tasks.py", line 631, in _wait_for_one
return f.result() # May raise f.exception().
^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/tqdm/asyncio.py", line 76, in wrap_awaitable
return i, await f
^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/langchain_community/document_loaders/async_html.py", line 177, in _fetch_with_rate_limit
return await self._fetch(url)
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/langchain_community/document_loaders/async_html.py", line 148, in _fetch
async with session.get(
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/aiohttp/client.py", line 1187, in __aenter__
self._resp = await self._coro
^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/aiohttp/client.py", line 601, in _request
await resp.start(conn)
File "/opt/miniconda3/envs/crewai/lib/python3.12/site-packages/aiohttp/client_reqrep.py", line 967, in start
raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 400, message='Got more than 8190 bytes (9515) when reading Header value is too long.', url=URL('https://www.walmart.com/shop/deals')
```
### System Info
I was using this after installing the latest version of crewai, so my langchain version might not be the absolute latest.
```
python --version
Python 3.12.1
```
```
pip freeze | grep langchain
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.13
langchain-openai==0.0.2.post1
```
```
system_profiler SPSoftwareDataType SPHardwareDataType
Software:
System Software Overview:
System Version: macOS 12.5.1 (21G83)
Kernel Version: Darwin 21.6.0
Secure Virtual Memory: Enabled
System Integrity Protection: Enabled
Hardware:
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: MacBookPro16,1
Processor Name: 8-Core Intel Core i9
Processor Speed: 2.3 GHz
Number of Processors: 1
Total Number of Cores: 8
L2 Cache (per Core): 256 KB
L3 Cache: 16 MB
Hyper-Threading Technology: Enabled
Memory: 32 GB
```
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async | Header value is too long error when using AsyncHtmlLoader | https://api.github.com/repos/langchain-ai/langchain/issues/16343/comments | 3 | 2024-01-21T18:18:14Z | 2024-04-28T16:17:59Z | https://github.com/langchain-ai/langchain/issues/16343 | 2,092,704,220 | 16,343 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I'm attempting to use RAG-Fusion with retriever as OpenSearchRetriever:
```python
retriever = OpenSearchRetriever(...)
...
query_chain = generate_queries | retriever.map() | reciprocal_rank_fusion
```
### Description
It seems the OpenSearchRetriever does not have a `.map` attribute, so it can't be used with RAG Fusion?
### System Info
LangChain 0.1.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | 'OpenSearchRetriever' object has no attribute 'map' | https://api.github.com/repos/langchain-ai/langchain/issues/16342/comments | 1 | 2024-01-21T17:38:31Z | 2024-01-22T15:19:30Z | https://github.com/langchain-ai/langchain/issues/16342 | 2,092,686,610 | 16,342 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
When I try to create a Gemini model using the built-in tools as follows, it results in an error.
```
llm = ChatVertexAI(model_name="gemini-pro")
sqlalchemy_uri = f"bigquery://{gcp_project_id}/{gcp_dataset_id}"
db = SQLDatabase.from_uri(sqlalchemy_uri)
tools = SQLDatabaseToolkit(db=db, llm=llm).get_tools()
llm_with_tools = llm.bind(functions=tools)
llm_with_tools.invoke("list tables")
```
Error Message
```
"name": "ValueError",
"message": "Value not declarable with JSON Schema, field: name='_callbacks_List[langchain_core.callbacks.base.BaseCallbackHandler]' type=BaseCallbackHandler required=True",
```
### Description
I want to use the built-in tools with the model from the langchain_google_vertexai library.
In the Gemini version of ChatVertexAI, when generating text (`_generate()`), it seems to be expected that the Tool bound to the model and given to functions will be converted to VertexAI format using `_format_tools_to_vertex_tool()`.
However, the current code fails to do this.
It seems that the issue might be with the branch` if is instance(tool, Tool) `in the following code.
https://github.com/langchain-ai/langchain/blob/master/libs/partners/google-vertexai/langchain_google_vertexai/functions_utils.py#L77-L89
Similar to the conversion function for OpenAI (`format_tool_to_openai_tool()`), I believe that BaseTool should be used instead of Tool.
### System Info
- Python 3.9.16
- langchain 0.0.354
- langchain-community 0.0.13
- langchain-core 0.1.13
- langchain-google-genai 0.0.4
- langchain-google-vertexai 0.0.2
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | When using the FunctionCalling feature with Gemini, built-in tools cannot be utilized. | https://api.github.com/repos/langchain-ai/langchain/issues/16340/comments | 1 | 2024-01-21T13:21:26Z | 2024-04-28T16:21:10Z | https://github.com/langchain-ai/langchain/issues/16340 | 2,092,583,872 | 16,340 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
``` Python
# Creating Embdeddings of the sentences and storing it into Graph DB
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-base-en-v1.5"
model_kwargs = {"device": "cpu"}
encode_kwargs = {"normalize_embeddings": True}
embeddings = HuggingFaceBgeEmbeddings(
model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs
)
```
``` Python
from langchain.graphs import Neo4jGraph
graph = Neo4jGraph(
url=os.environ["NEO4J_URI"],
username=os.environ["NEO4J_USERNAME"],
password=os.environ["NEO4J_PASSWORD"]
)
```
``` Python
from neo4j import GraphDatabase
uri = os.environ["NEO4J_URI"]
username = os.environ["NEO4J_USERNAME"]
password = os.environ["NEO4J_PASSWORD"]
driver = GraphDatabase.driver(uri, auth=(username, password))
session = driver.session()
result = session.run("SHOW VECTOR INDEXES")
for record in result:
print(record)
```
``` Python
# Instantiate Neo4j vector from documents
neo4j_vector = Neo4jVector.from_documents(
documents,
HuggingFaceBgeEmbeddings(),
name="graph_qa_index",
url=os.environ["NEO4J_URI"],
username=os.environ["NEO4J_USERNAME"],
password=os.environ["NEO4J_PASSWORD"]
)
```
### Description
``` Python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-26-b09e1b2ff4ef>](https://localhost:8080/#) in <cell line: 2>()
1 # Instantiate Neo4j vector from documents
----> 2 neo4j_vector = Neo4jVector.from_documents(
3 documents,
4 HuggingFaceBgeEmbeddings(),
5 url=os.environ["NEO4J_URI"],
2 frames
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/neo4j_vector.py](https://localhost:8080/#) in __from(cls, texts, embeddings, embedding, metadatas, ids, create_id_index, search_type, **kwargs)
445 # If the index already exists, check if embedding dimensions match
446 elif not store.embedding_dimension == embedding_dimension:
--> 447 raise ValueError(
448 f"Index with name {store.index_name} already exists."
449 "The provided embedding function and vector index "
ValueError: Index with name vector already exists.The provided embedding function and vector index dimensions do not match.
Embedding function dimension: 1024
Vector index dimension: 768
```
**The embedding model utilized in `HuggingFaceBgeEmbeddings` is denoted as `BAAI/bge-base-en-v1.5`, possessing an embedding dimension of `768`. This specification ostensibly aligns with the vector store index dimension of `768`. Nevertheless, upon execution of the provided code, a dimension mismatch error is encountered despite the apparent alignment.**
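A hedged explanation of the mismatch (inferred from the code above): the configured `embeddings` object does use the 768-dimension bge-base model, but the `from_documents` call constructs a fresh `HuggingFaceBgeEmbeddings()` with no arguments, which falls back to the library default (`BAAI/bge-large-en`, 1024 dimensions). Reusing the configured instance — and passing the index via `index_name` rather than `name` — should line the dimensions up:

```python
neo4j_vector = Neo4jVector.from_documents(
    documents,
    embeddings,  # the BAAI/bge-base-en-v1.5 instance (768 dims) configured above
    index_name="graph_qa_index",
    url=os.environ["NEO4J_URI"],
    username=os.environ["NEO4J_USERNAME"],
    password=os.environ["NEO4J_PASSWORD"],
)
```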
### System Info
``` YAML
Python version: 3.10.10
Operating System: Windows 11
pip == 23.3.1
python == 3.10.10
langchain == 0.1.0
transformers == 4.36.2
sentence_transformers == 2.2.2
unstructured == 0.12.0
```
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Seeking Assistance: Incompatibility of Vector Index Model and Embedding Function Dimensions in Neo4j | https://api.github.com/repos/langchain-ai/langchain/issues/16336/comments | 2 | 2024-01-21T10:01:30Z | 2024-04-30T16:19:37Z | https://github.com/langchain-ai/langchain/issues/16336 | 2,092,511,069 | 16,336 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
-
### Description
https://github.com/langchain-ai/langchain/blob/ef75bb63ce5cc4fb76ba1631ebe582f56103ab7e/libs/langchain/langchain/agents/json_chat/base.py#L151
This seems unnecessary, because the JSON chat agent relies on the model emitting JSON rather than on this kind of `\nObservation`-terminated completion.
### System Info
-
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Is `llm.bind(stop=["\nObservation"])` really meanful in `json_chat` agnet? | https://api.github.com/repos/langchain-ai/langchain/issues/16334/comments | 6 | 2024-01-21T08:39:34Z | 2024-04-07T15:16:42Z | https://github.com/langchain-ai/langchain/issues/16334 | 2,092,484,424 | 16,334 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
I am trying to follow along with the examples at
https://python.langchain.com/docs/expression_language/cookbook/sql_db
Everything is tracking until `full_chain.invoke({"question": quest})`.
(https://github.com/langchain-ai/langchain/blob/3d23a5eb36045db3b7a05c34947b74bd4909ba3b/docs/docs/expression_language/cookbook/sql_db.ipynb#L162)
the error I get suggests that the `query` being formed for sqlalchemy is not actually the query but some string starting "To answer this question,..."
> sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "To": syntax error
> [SQL: To answer this question, we can run a query that retrieves the number of employees from the `Employee` table. Here is an example query:
> ```
> SELECT COUNT(*) FROM Employee;
> ```
(full error stack below)
I also found the examples here
https://python.langchain.com/docs/use_cases/qa_structured/sql
but get basically the same errors from the
`db_chain = SQLDatabaseChain.from_llm(model, db, verbose=True)`
( https://github.com/langchain-ai/langchain/blob/ef75bb63ce5cc4fb76ba1631ebe582f56103ab7e/docs/docs/use_cases/qa_structured/sql.ipynb#L94)
(full error stack below)
possibly related issues: #11870, #15077
any guesses why that would be? Or how to debug further? Thanks!
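A hedged guess plus a workaround sketch: a local model often wraps the SQL in prose and a code fence instead of emitting the bare statement the chain expects, so the whole chatty response gets handed to SQLAlchemy. One option is to strip the statement out before `db.run` (the `write_query` name below stands in for whatever query-writing chain the cookbook builds):

```python
import re

def extract_sql(text: str) -> str:
    """Pull the first fenced or bare SELECT statement out of a chatty LLM response."""
    fenced = re.search(r"```(?:sql)?\s*(.*?)```", text, re.DOTALL)
    if fenced:
        return fenced.group(1).strip()
    bare = re.search(r"(SELECT .*?;)", text, re.DOTALL | re.IGNORECASE)
    return bare.group(1) if bare else text

# Plain functions are coerced to runnables, so this slots into the LCEL pipeline:
query_chain = write_query | extract_sql
```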
### Description
See above
## Full error#1
sql1 To answer this question, we can use a SELECT statement to retrieve the number of employees from the `Employee` table. Here's an example query:
```
SELECT COUNT(*) FROM Employee;
```
This will return the number of rows in the `Employee` table, which is equal to the number of employees.
Answer: There are 3 employees.
Traceback (most recent call last):
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: near "To": syntax error
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 207, in <module>
main()
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 202, in main
lc_cbSQL2(ollama,quest)
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 175, in lc_cbSQL2
chainResult2 = full_chain.invoke({"question": quest})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1780, in invoke
input = step.invoke(
^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 415, in invoke
return self._call_with_config(self._invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 981, in _call_with_config
context.run(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 402, in _invoke
**self.mapper.invoke(
^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2345, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2345, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3080, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 981, in _call_with_config
context.run(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2956, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 170, in <lambda>
response=lambda x: db.run(x["query"]),
^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_community/utilities/sql_database.py", line 437, in run
result = self._execute(command, fetch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_community/utilities/sql_database.py", line 414, in _execute
cursor = connection.execute(text(command))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 517, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2344, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "To": syntax error
[SQL: To answer the question "How many employees are there?" we can use a SELECT statement to retrieve the number of employees from the Employee table. Here's an example query:
```
SELECT COUNT(*) FROM Employee;
```
Explanation:
* `COUNT(*)` is a function that returns the number of rows in a table.
* `FROM Employee` specifies the table to retrieve the count from.
When we run this query, we should see the number of employees in the table. For example, if there are 3 employees in the table, the query will return `3`.]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
## Full error#2
.../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
> Entering new SQLDatabaseChain chain...
How many employees are there?
SQLQuery: To answer the question "How many employees are there?", we need to query the `Employee` table. The query would be:
```
SELECT COUNT(*) FROM Employee;
```
This will return the number of rows in the `Employee` table, which is the number of employees.
Traceback (most recent call last):
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlite3.OperationalError: near "To": syntax error
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 207, in <module>
main()
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 204, in main
lcSQL(ollama,quest)
File ".../Code/eclipse/ai4law/src/lc-rag.py", line 180, in lcSQL
rv = db_chain.run(quest)
^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain/chains/base.py", line 538, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain/chains/base.py", line 363, in __call__
return self.invoke(
^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain/chains/base.py", line 162, in invoke
raise e
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_experimental/sql/base.py", line 201, in _call
raise exc
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_experimental/sql/base.py", line 146, in _call
result = self.database.run(sql_cmd)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_community/utilities/sql_database.py", line 437, in run
result = self._execute(command, fetch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/langchain_community/utilities/sql_database.py", line 414, in _execute
cursor = connection.execute(text(command))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 517, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2344, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File ".../data/pkg/miniconda3/envs/ai4law/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) near "To": syntax error
[SQL: To answer the question "How many employees are there?", we need to query the `Employee` table. The query would be:
```
SELECT COUNT(*) FROM Employee;
```
This will return the number of rows in the `Employee` table, which is the number of employees.]
(Background on this error at: https://sqlalche.me/e/20/e3q8)
### System Info
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.12
langchain-experimental==0.0.49
python 3.11.7
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | SQL examples from cookbook and use_cases/qa_structured don't work for me? | https://api.github.com/repos/langchain-ai/langchain/issues/16331/comments | 2 | 2024-01-21T03:08:09Z | 2024-04-28T18:13:33Z | https://github.com/langchain-ai/langchain/issues/16331 | 2,092,399,367 | 16,331 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_community.chat_models import ChatZhipuAI
model = ChatZhipuAI(
    model="chatglm_turbo",
    api_key="xxxx",
)

print(model.invoke("hello, what today is today?"))
```
### Description
## Problem description
I tried to call the chatglm_turbo model using ChatZhipuAI and got an error; see the screenshot below.
## Reasons
1. The zhipuai library has been upgraded to 2.0.1; version 2.0 is not compatible with the previous API, and version 1.x looks deprecated.
2. The current community code has not been updated for it.
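A possible interim workaround, given the cause above (untested here): pin the old client with `pip install "zhipuai<2.0"` until the `ChatZhipuAI` integration is updated for the 2.x API.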
### System Info
```shell
> pip list | grep zhipuai
zhipuai 2.0.1
```
<img width="1004" alt="image" src="https://github.com/langchain-ai/langchain/assets/30918004/076b377b-00cb-45aa-9813-ce7f6916204e">
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | community:ChatZhipuAI is currently not working well because the zhipuai library has been upgraded to 2.0.1 | https://api.github.com/repos/langchain-ai/langchain/issues/16330/comments | 2 | 2024-01-21T02:59:19Z | 2024-04-28T16:22:24Z | https://github.com/langchain-ai/langchain/issues/16330 | 2,092,390,036 | 16,330 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code initializes the chatbot instance using ConversationalRetrievalChain with the 'return_source_documents' parameter:
```
def initialize_chatbot(index_name):
    chatbot = ChatOpenAI(
        openai_api_key=os.environ["OPENAI_API_KEY"],
        model='gpt-3.5-turbo',
        temperature=0.2
    )
    embeddings = OpenAIEmbeddings(openai_api_key=os.environ["OPENAI_API_KEY"])
    vectorstore = Pinecone.from_existing_index(index_name, embeddings)
    retriever = vectorstore.as_retriever()
    memory = ConversationBufferWindowMemory(
        k=10,
        memory_key="chat_history",
        return_messages=True
    )
    qa = ConversationalRetrievalChain.from_llm(
        llm=chatbot,
        retriever=retriever,
        memory=memory,
        return_source_documents=True
    )
    return qa
```
The following code runs a query:
```
def chat(query, qa):
    response = qa(query)
    print(response)

query = "what is the nutrition information for the boiled egg recipe?"
chat(query, chatbot)
```
The error I get:
```
File [langchain/chains/base.py:314], in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
[312] raise e
[313]run_manager.on_chain_end(outputs)
--> [314] final_outputs: Dict[str, Any] = self.prep_outputs(
[315] inputs, outputs, return_only_outputs
[316] )
[317] if include_run_info:
[318] final_outputs[RUN_KEY] = RunInfo(run_id=run_manager.run_id)
File [langchain/chains/base.py:410], in Chain.prep_outputs(self, inputs, outputs, return_only_outputs)
[408] self._validate_outputs(outputs)
[409] if self.memory is not None:
--> [410] self.memory.save_context(inputs, outputs)
[411] if return_only_outputs:
...
---> [29] raise ValueError(f"One output key expected, got {outputs.keys()}")
[30] output_key = list(outputs.keys())[0]
[31] else:
ValueError: One output key expected, got dict_keys(['answer', 'source_documents'])
```
### Description
I am trying to use the langchain library to return source documents from ConversationalRetrievalChain. However, I keep getting an error saying only one output key is expected. Looking into the code, I think it is executing the `__call__` function (deprecated in langchain 0.1.0), and the memory only expects one output key. I am using the most recent langchain version that pip allows (`pip install --upgrade langchain`), which is 0.1.1. How can I get this to execute properly?
Additional notes:
- I am using langchain-openai for ChatOpenAI and OpenAIEmbeddings
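A likely fix (a sketch; it matches the usual resolution for this `ValueError`): when the chain returns both `answer` and `source_documents`, the memory needs to be told explicitly which key to store:

```python
memory = ConversationBufferWindowMemory(
    k=10,
    memory_key="chat_history",
    return_messages=True,
    output_key="answer",  # disambiguates between 'answer' and 'source_documents'
)
```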
### System Info
"pip install --upgrade langchain"
Python 3.11.5
Langchain 1.1.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | return_source_documents does not work for ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/16323/comments | 4 | 2024-01-21T00:14:07Z | 2024-01-21T02:15:01Z | https://github.com/langchain-ai/langchain/issues/16323 | 2,092,284,846 | 16,323 |
[
"hwchase17",
"langchain"
] | ### Feature request
Hi I am building a chatbot that uses Vectordb to return the most up-to-date news.
how can I set the chain to retrieve the k documents vectors sorted by publish_date which is populated as a metadata field?
Here is how I define the chain:
```python
self.chain = RetrievalQAWithSourcesChain.from_chain_type(
    llm=self.llm,
    chain_type="stuff",
    retriever=self.vector_db.db.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={"score_threshold": .4, "k": 3},
    ),
    chain_type_kwargs=self.chain_type_kwargs,
    return_source_documents=True,
    verbose=True,
)
```
### Motivation
To retrieve the most up-to-date sources in the response
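A minimal sketch of one approach (vector stores generally rank by similarity, so sorting usually happens as a post-processing step; the helper below is hypothetical):

```python
from langchain_core.documents import Document

def sort_by_publish_date(docs: list[Document]) -> list[Document]:
    # Newest first; documents missing the field sort last.
    return sorted(docs, key=lambda d: d.metadata.get("publish_date", ""), reverse=True)
```

The sorted list can then be fed to the chain's documents input, or the sort can be wrapped in a small custom retriever that runs after `get_relevant_documents`.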
### Your contribution
Helping expand the library | Sort document option using RetrievalQAWithSourcesChain | https://api.github.com/repos/langchain-ai/langchain/issues/16320/comments | 3 | 2024-01-20T21:44:51Z | 2024-04-30T16:19:40Z | https://github.com/langchain-ai/langchain/issues/16320 | 2,092,250,536 | 16,320 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/use_cases/graph/graph_falkordb_qa
Could you please add the logic to visualize the graph created using FalkorDB to the documentation? For example, LlamaIndex provides the following code for visualizing a graph:
```
## create graph
from pyvis.network import Network
g = index.get_networkx_graph()
net = Network(notebook=True, cdn_resources="in_line", directed=True)
net.from_nx(g)
net.show("falkordbgraph_draw.html")
```
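In the meantime, a hedged sketch of an equivalent flow for LangChain's FalkorDB graph (not from the docs; it assumes nodes expose a `name` property and uses `graph.query` to pull edges):

```python
import networkx as nx
from pyvis.network import Network

# Pull (source, relationship, target) triples out of FalkorDB with Cypher.
edges = graph.query("MATCH (a)-[r]->(b) RETURN a.name, type(r), b.name")

g = nx.DiGraph()
for src, rel, dst in edges:
    g.add_edge(src, dst, label=rel)

net = Network(notebook=True, cdn_resources="in_line", directed=True)
net.from_nx(g)
net.show("falkordb_graph.html")
```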
### Idea or request for content:
Please provide the logic to visualize graph in the FalkorDBQAChain documentation(https://python.langchain.com/docs/use_cases/graph/graph_falkordb_qa) | How to Visualize the Graph created using FalkorDB ? | https://api.github.com/repos/langchain-ai/langchain/issues/16319/comments | 5 | 2024-01-20T19:23:01Z | 2024-05-01T16:06:34Z | https://github.com/langchain-ai/langchain/issues/16319 | 2,092,211,785 | 16,319 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
In the LangChain documentation about SQL use cases, there seems to be an error with the import statement: `create_retriever_tool` cannot be imported from `langchain_community.agent_toolkits` as specified. As I understand it, the proper usage may also require passing `extra_tools` as a parameter to `create_sql_agent`, which accepts a sequence of `BaseTool` objects.
<img width="1073" alt="20240121012842" src="https://github.com/langchain-ai/langchain/assets/43747516/bbfbe969-1a17-4822-a80c-514d44fb7fef">
### Idea or request for content:
_No response_ | DOC: cannot import name 'create_retriever_tool' from 'langchain_community.agent_toolkits' | https://api.github.com/repos/langchain-ai/langchain/issues/16317/comments | 4 | 2024-01-20T17:29:24Z | 2024-04-30T16:13:16Z | https://github.com/langchain-ai/langchain/issues/16317 | 2,092,170,271 | 16,317 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Here is my issue in brief:
``` Python
import os
poppler_path = 'C:\\Users\\Mohd Kaif\\Downloads\\poppler-23.08.0\\Library\\bin'
os.environ["PATH"] += os.pathsep + poppler_path
```
``` Python
directory = '/content/drive/MyDrive/History_QA_dataset'
```
``` Python
from pathlib import Path

def load_files(directory):
    documents = list(Path(directory).iterdir())
    return documents

documents = load_files(directory)
print(len(documents))
```
``` Python
documents
```
``` Python
from langchain_community.document_loaders import UnstructuredPDFLoader
loader = UnstructuredPDFLoader("/content/drive/MyDrive/History_QA_dataset/ncert_s_modern_india_bipan_chandra_old_edition-1566975158976.pdf")
pages = loader.load()
```
### Description
It raises this error:
``` Python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/pdf2image/pdf2image.py](https://localhost:8080/#) in pdfinfo_from_path(pdf_path, userpw, ownerpw, poppler_path, rawdates, timeout, first_page, last_page)
580 env["LD_LIBRARY_PATH"] = poppler_path + ":" + env.get("LD_LIBRARY_PATH", "")
--> 581 proc = Popen(command, env=env, stdout=PIPE, stderr=PIPE)
582
14 frames
FileNotFoundError: [Errno 2] No such file or directory: 'pdfinfo'
During handling of the above exception, another exception occurred:
PDFInfoNotInstalledError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/pdf2image/pdf2image.py](https://localhost:8080/#) in pdfinfo_from_path(pdf_path, userpw, ownerpw, poppler_path, rawdates, timeout, first_page, last_page)
605
606 except OSError:
--> 607 raise PDFInfoNotInstalledError(
608 "Unable to get page count. Is poppler installed and in PATH?"
609 )
PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH?
```
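A hedged observation: the traceback paths (`/usr/local/lib/...`, `/content/drive/...`) suggest this is running on Google Colab (Linux), so the Windows poppler path appended to `PATH` above has no effect there. On Debian/Ubuntu-based environments, installing poppler with `apt-get install -y poppler-utils` puts `pdfinfo` on the PATH so pdf2image can find it.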
### System Info
System Information:
Python version: `3.10.10`
Operating System: `Windows 11`
pip == 23.3.1
python == 3.10.10
langchain == 0.1.0
transformers == 4.36.2
sentence_transformers == 2.2.2
unstructured == 0.12.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | PDFInfoNotInstalledError: Unable to get page count. Is poppler installed and in PATH? | https://api.github.com/repos/langchain-ai/langchain/issues/16315/comments | 1 | 2024-01-20T15:11:07Z | 2024-04-27T16:24:14Z | https://github.com/langchain-ai/langchain/issues/16315 | 2,092,088,912 | 16,315 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
from langchain_openai.chat_models import ChatOpenAI
chat = ChatOpenAI()
### Description
I am working on Windows 11 with Python 3.11. I am using PyCharm and I have installed langchain_openai==0.0.3.
When I initialize `chat = ChatOpenAI()`, I get the following error:
Traceback (most recent call last):
File "C:\workingfolder\PythonProjects\agents\main.py", line 13, in <module>
chat = ChatOpenAI()
^^^^^^^^^^^^
File "C:\Users\rnema\.virtualenvs\agents-ULuCqbe2\Lib\site-packages\langchain_core\load\serializable.py", line 107, in __init__
super().__init__(**kwargs)
File "C:\Users\rnema\.virtualenvs\agents-ULuCqbe2\Lib\site-packages\pydantic\v1\main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rnema\.virtualenvs\agents-ULuCqbe2\Lib\site-packages\pydantic\v1\main.py", line 1102, in validate_model
values = validator(cls_, values)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rnema\.virtualenvs\agents-ULuCqbe2\Lib\site-packages\langchain_openai\chat_models\base.py", line 345, in validate_environment
values["client"] = openai.OpenAI(**client_params).chat.completions
^^^^^^^^^^^^^
AttributeError: module 'openai' has no attribute 'OpenAI'
```
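The root cause appears to be in the System Info below: openai==0.27.8 predates the v1 SDK, and langchain-openai builds its client from `openai.OpenAI`, which only exists from v1 onward. A hedged check:

```python
# hedged check; `pip install -U "openai>=1"` should make this pass
import openai

print(openai.__version__)          # 0.27.8 in my environment
print(hasattr(openai, "OpenAI"))   # False until the v1 SDK is installed
```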
### System Info
aiohttp==3.9.1
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
asgiref==3.7.2
attrs==23.2.0
backoff==2.2.1
bcrypt==4.1.2
build==1.0.3
cachetools==5.3.2
certifi==2023.11.17
charset-normalizer==3.3.2
chroma-hnswlib==0.7.3
chromadb==0.4.22
click==8.1.7
colorama==0.4.6
coloredlogs==15.0.1
dataclasses-json==0.6.3
Deprecated==1.2.14
distro==1.9.0
fastapi==0.109.0
filelock==3.13.1
flatbuffers==23.5.26
frozenlist==1.4.1
fsspec==2023.12.2
google-auth==2.26.2
googleapis-common-protos==1.62.0
greenlet==3.0.3
grpcio==1.60.0
h11==0.14.0
httpcore==1.0.2
httptools==0.6.1
httpx==0.26.0
huggingface-hub==0.20.2
humanfriendly==10.0
idna==3.6
importlib-metadata==6.11.0
importlib-resources==6.1.1
jsonpatch==1.33
jsonpointer==2.4
kubernetes==29.0.0
**langchain==0.0.352
langchain-community==0.0.11
langchain-core==0.1.8
langchain-openai==0.0.3
langsmith==0.0.78**
markdown-it-py==3.0.0
marshmallow==3.20.1
mdurl==0.1.2
mmh3==4.1.0
monotonic==1.6
mpmath==1.3.0
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.3
oauthlib==3.2.2
onnxruntime==1.16.3
openai==0.27.8
opentelemetry-api==1.22.0
opentelemetry-exporter-otlp-proto-common==1.22.0
opentelemetry-exporter-otlp-proto-grpc==1.22.0
opentelemetry-instrumentation==0.43b0
opentelemetry-instrumentation-asgi==0.43b0
opentelemetry-instrumentation-fastapi==0.43b0
opentelemetry-proto==1.22.0
opentelemetry-sdk==1.22.0
opentelemetry-semantic-conventions==0.43b0
opentelemetry-util-http==0.43b0
overrides==7.4.0
packaging==23.2
posthog==3.3.1
protobuf==4.25.2
pulsar-client==3.4.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pyboxen==1.2.0
pydantic==2.5.3
pydantic_core==2.14.6
Pygments==2.17.2
PyPika==0.48.9
pyproject_hooks==1.0.0
pyreadline3==3.4.1
python-dateutil==2.8.2
python-dotenv==1.0.0
PyYAML==6.0.1
regex==2023.12.25
requests==2.31.0
requests-oauthlib==1.3.1
rich==13.7.0
rsa==4.9
six==1.16.0
sniffio==1.3.0
SQLAlchemy==2.0.25
starlette==0.35.1
sympy==1.12
tenacity==8.2.3
tiktoken==0.5.2
tokenizers==0.15.0
tqdm==4.66.1
typer==0.9.0
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.1.0
uvicorn==0.25.0
watchfiles==0.21.0
websocket-client==1.7.0
websockets==12.0
wrapt==1.16.0
yarl==1.9.4
zipp==3.17.0
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | using ChatOpenAI gives an error AttributeError: module 'openai' has no attribute 'OpenAI' | https://api.github.com/repos/langchain-ai/langchain/issues/16314/comments | 5 | 2024-01-20T12:02:30Z | 2024-05-22T22:08:15Z | https://github.com/langchain-ai/langchain/issues/16314 | 2,092,028,686 | 16,314 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
import os

from dotenv import load_dotenv
from langchain.llms import OpenAI
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.utilities.zapier import ZapierNLAWrapper

load_dotenv()
os.getenv('ZAPIER_NLA_API_KEY')
os.getenv('OPENAI_API_KEY')

llm = OpenAI(temperature=.3)
zapier = ZapierNLAWrapper()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
agent = initialize_agent(toolkit.tools, llm, agent="zero-shot-react-description", verbose=True)

for tool in toolkit.tools:
    print(tool.name)
    print(tool.description)
    print("\n\n")

agent.run('Send an email to [email protected] saying hello from Dr. sss')
```
### Description
Gmail: Send Email
A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are ['Message_Text', 'Channel'], your instruction should be something like 'send a slack message to the #general channel with the text hello world'. Another example: if the params are ['Calendar', 'Search_Term'], your instruction should be something like 'find the meeting in my personal calendar at 3pm'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say 'not enough information provided in the instruction, missing <param>'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: Gmail: Send Email, and has params: ['Body', 'To', 'Subject', 'Cc']
LinkedIn: Create Share Update
A wrapper around Zapier NLA actions. The input to this tool is a natural language instruction, for example "get the latest email from my bank" or "send a slack message to the #general channel". Each tool will have params associated with it that are specified as a list. You MUST take into account the params when creating the instruction. For example, if the params are ['Message_Text', 'Channel'], your instruction should be something like 'send a slack message to the #general channel with the text hello world'. Another example: if the params are ['Calendar', 'Search_Term'], your instruction should be something like 'find the meeting in my personal calendar at 3pm'. Do not make up params, they will be explicitly specified in the tool description. If you do not have enough information to fill in the params, just say 'not enough information provided in the instruction, missing <param>'. If you get a none or null response, STOP EXECUTION, do not try to another tool!This tool specifically used for: LinkedIn: Create Share Update, and has params: ['Comment', 'Visible_To']
```
> Entering new AgentExecutor chain...
I need to use the Gmail: Send Email tool to complete this task.
Action: Gmail: Send Email
Action Input: {'Body': 'Hello from Dr. Khala', 'To': '[email protected]', 'Subject': 'Hello', 'Cc': ''}Traceback (most recent call last):
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/agents/agent.py", line 1245, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/agents/agent.py", line 1095, in _take_next_step
observation = tool.run(
^^^^^^^^^
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/tools/base.py", line 365, in run
raise e
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/tools/base.py", line 337, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/tools/zapier/tool.py", line 143, in _run
warn_deprecated(
File "/Users/ss/.pyenv/versions/demo311/lib/python3.11/site-packages/langchain/_api/deprecation.py", line 295, in warn_deprecated
raise NotImplementedError(
NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases
```
### System Info
MacBook Pro; the full traceback is included in the Description above.
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases | https://api.github.com/repos/langchain-ai/langchain/issues/16312/comments | 6 | 2024-01-20T10:59:07Z | 2024-07-01T08:01:13Z | https://github.com/langchain-ai/langchain/issues/16312 | 2,092,010,443 | 16,312 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
Is there any way to mark some of my OpenAI tool parameters as mandatory and others as optional for a DynamicStructuredTool?
```javascript
const structuredTool2 = new DynamicStructuredTool({
  name: "get_order_update",
  description:
    "Get the update for a given order id and otp and return it. Both otp and order id are required; location is optional.",
  schema: z.object({
    order_id: z.string().describe("The order id of the order to be tracked"),
    otp: z.string().describe("The otp of the order to be tracked"),
    location: z
      .string()
      .describe("The location of the order to be tracked")
      .optional(),
  }),
  func: ({ order_id, otp, location }) => {
    try {
      if (otp !== "1234") {
        return "Invalid otp";
      }
      let message = `Your order id ${order_id} is on the way. It will be delivered by 5pm today.`;
      if (location !== undefined) {
        message = message + ` Your order is currently at ${location}.`;
      }
      return message;
    } catch (e) {
      return "Systems are busy at the moment, please try again later";
    }
  },
});
```
### Idea or request for content:
_No response_ | Improved docs on creating custom tools for an agent | https://api.github.com/repos/langchain-ai/langchain/issues/16310/comments | 1 | 2024-01-20T09:46:10Z | 2024-04-27T16:24:39Z | https://github.com/langchain-ai/langchain/issues/16310 | 2,091,988,971 | 16,310 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
While initializing the GooglePalm LLM, I encounter the NotImplementedError shown below.
Using Langchain 0.1.1
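For reference, a hedged minimal reproduction (the exact construction in my app may differ slightly):

```python
# hedged minimal reproduction sketch
from langchain_community.llms import GooglePalm

llm = GooglePalm(google_api_key="...")  # raises the NotImplementedError shown below
```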
### Description

### System Info
Name: langchain
Version: 0.1.1
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Requires: aiohttp, dataclasses-json, jsonpatch, langchain-community, langchain-core, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: kor, langchain-experimental
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | GooglePalm - NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases | https://api.github.com/repos/langchain-ai/langchain/issues/16308/comments | 2 | 2024-01-20T05:42:39Z | 2024-05-03T16:06:05Z | https://github.com/langchain-ai/langchain/issues/16308 | 2,091,911,843 | 16,308 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
All the chat model responses inheriting `BaseModel` are converted into `dict` using `response.dict()`, which emits the following warning in the console:
```bash
PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.5/migration/
response = response.dict()
```
### Description
Need to update the package to migrate to the new pydantic version - https://docs.pydantic.dev/2.0/migration/#changes-to-pydanticbasemodel
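For illustration, the v1-to-v2 change in question (hedged sketch, assuming pydantic v2 is installed):

```python
from pydantic import BaseModel

class Response(BaseModel):
    content: str

r = Response(content="hi")
legacy = r.dict()         # works under the v2 shim but emits PydanticDeprecatedSince20
current = r.model_dump()  # the pydantic v2 replacement
assert legacy == current
```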
### System Info
langchain==0.0.336
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | upgrade to pydantic v2 | https://api.github.com/repos/langchain-ai/langchain/issues/16306/comments | 5 | 2024-01-20T05:00:06Z | 2024-06-11T17:17:54Z | https://github.com/langchain-ai/langchain/issues/16306 | 2,091,895,669 | 16,306 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
# Oracle DB Connection - Enable for Oracle DB
# USER_ID, PASSWORD, HOST_NAME, PORT, SERVICE_NAME, llm, zero_prompt
# and llm_tool are defined elsewhere in my code
from sqlalchemy import create_engine
from langchain.agents import create_sql_agent
from langchain.agents.agent_types import AgentType
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.utilities import SQLDatabase

DIALECT = 'oracle'
SQL_DRIVER = 'oracledb'
oracle_conn_str = DIALECT + '+' + SQL_DRIVER + '://' + USER_ID + ':' + PASSWORD + '@' + HOST_NAME + ':' + str(PORT) + '/?service_name=' + SERVICE_NAME
db_engine = create_engine(oracle_conn_str)
db = SQLDatabase(db_engine)

# Agent Code
sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_toolkit.get_tools()
agent = create_sql_agent(
    llm=llm,
    toolkit=sql_toolkit,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    prefix=zero_prompt,
    extra_tools=[llm_tool],
    handle_parsing_errors=True,
    agent_executor_kwargs={
        "return_intermediate_steps": True
    },
)
```
### Description
When using the SQL agent on Azure SQL Database, I provide the Database name in the connection string. This allows the SQL agent to successfully retrieve the table names and their schema.
However, when working with the SQL agent on Oracle Database, I provide the UserID, Password, Server name, Host, and Port in the connection string (code provided in the example above). The connection is established successfully, but the SQL agent fails to recognize the tables. I suspect the problem arises from my tables being located under a specific schema (let's call it ABC_SCHEMA).
Upon investigation, it seems that when the SQL agent enters the executor chain with the "**Action: sql_db_list_tables**," it fails to list the tables under ABC_SCHEMA. As a result, an error is generated, specifically: **ValueError: An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass handle_parse LLM output: Observation needed.**
In simple terms, the SQL agent is facing difficulty in identifying tables under a particular schema in Oracle Database, leading to the mentioned error. Can someone please help to fix this issue.
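In the meantime, a hedged workaround that may help: `SQLDatabase` accepts a `schema` argument, so the owning schema can be named explicitly.

```python
# hedged workaround sketch
db = SQLDatabase(db_engine, schema="ABC_SCHEMA")
print(db.get_usable_table_names())  # should now list the tables under ABC_SCHEMA
```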
### System Info
langchain==0.0.348
oracledb==2.0.1
SQLAlchemy==2.0.25
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | SQL Agent is unable to recognize the tables and its schema when connected to a Oracle DB. | https://api.github.com/repos/langchain-ai/langchain/issues/16294/comments | 2 | 2024-01-19T20:47:10Z | 2024-04-27T16:33:47Z | https://github.com/langchain-ai/langchain/issues/16294 | 2,091,393,967 | 16,294 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.schema import HumanMessage, AIMessage
llm = ChatGoogleGenerativeAI(model="gemini-pro")
llm([
AIMessage(role="model", content="Hi"),
HumanMessage(role="user", content="Tell me a joke")
]) # gives the error
```
### Description
* Trying to use Langchain's Gemini integration to quickly process a chat history that starts with an AIMessage, as given by the example [here](https://github.com/langchain-ai/streamlit-agent/blob/main/streamlit_agent/basic_streaming.py).
* The model should return a response with no errors because the last message I have given as an input has the role of ``user``, but instead it gives the error: "ChatGoogleGenerativeAIError: Invalid argument provided to Gemini: 400 Please ensure that multiturn requests ends with a user role or a function response." A workaround sketch that unblocks me is shown below.
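In the meantime, this hedged workaround keeps the history valid by making sure it starts with a human turn (the placeholder text is my own convention):

```python
# hedged workaround sketch, not an official fix
history = [AIMessage(content="Hi"), HumanMessage(content="Tell me a joke")]
if isinstance(history[0], AIMessage):
    history = [HumanMessage(content="(conversation start)")] + history
llm.invoke(history)
```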
### System Info
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.12
langchain-experimental==0.0.49
langchain-google-genai==0.0.6
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Gemini integration fails when a list of messages starting with AIMessage is given as input | https://api.github.com/repos/langchain-ai/langchain/issues/16288/comments | 6 | 2024-01-19T17:33:40Z | 2024-04-27T16:39:43Z | https://github.com/langchain-ai/langchain/issues/16288 | 2,090,986,643 | 16,288 |
[
"hwchase17",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
At the moment, RunnableEach is missing implementations of streaming methods.
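A hypothetical usage sketch of the gap (not library code; `.map()` wrapping a runnable in a RunnableEach is existing behavior):

```python
from langchain_core.runnables import RunnableLambda

each = RunnableLambda(lambda x: x + 1).map()  # a RunnableEach under the hood
for chunk in each.stream([1, 2, 3]):
    print(chunk)  # today this falls back to the default single-chunk stream
```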
https://github.com/langchain-ai/langchain/blob/e5445c3890cc8a57250817c61de796e1a1dcc521/libs/core/langchain_core/runnables/base.py#L3662-L3662 | Add stream/astream_log/astream_events to RunnableEach implementation | https://api.github.com/repos/langchain-ai/langchain/issues/16281/comments | 2 | 2024-01-19T15:45:18Z | 2024-05-30T16:03:58Z | https://github.com/langchain-ai/langchain/issues/16281 | 2,090,812,547 | 16,281 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain.retrievers import BM25Retriever

# illustrative two-document corpus; my real documents are larger
bm25_retriever = BM25Retriever.from_texts(["first page text", "second page text"])
results = bm25_retriever.get_relevant_documents(search_key)
```
### Description
I am using the BM25 retriever and noticed the following two unexpected behaviors:
1. When I add only two documents, get_relevant_documents() isn't working as expected, but will always return the second (i.e. the last page) - independent of the actual search key.
2. get_relevant_documents() defaults to returning the last page when it finds no suitable match (e.g. when tested with some arbitrary character combinations like 'jgosntih' or with an empty string as search key). Thus, sensible results cannot be discerned from non-sensible results.
Probably, both observations can be explained by the same default behavior of get_relevant_documents(), which does not make sense and leads to wrong and unusable results. A diagnostic sketch follows.
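For what it's worth, this matches how rank_bm25 behaves underneath (hedged diagnostic sketch; the corpus strings are illustrative):

```python
from rank_bm25 import BM25Okapi

corpus = ["first page text", "second page text"]
bm25 = BM25Okapi([doc.split() for doc in corpus])
print(bm25.get_scores("jgosntih".split()))  # all-zero scores, yet top-n retrieval still returns n docs
```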
### System Info
python = "3.10"
langchain = "0.0.272"
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | BM25 Retriever: get_relevant_documents() returns wrong results | https://api.github.com/repos/langchain-ai/langchain/issues/16274/comments | 1 | 2024-01-19T14:23:41Z | 2024-04-26T16:23:27Z | https://github.com/langchain-ai/langchain/issues/16274 | 2,090,674,295 | 16,274 |
[
"hwchase17",
"langchain"
] | ### Feature request
Explore these Neo4j resources for enhanced insights:
[Indexes for Vector Search](https://neo4j.com/docs/cypher-manual/current/indexes-for-vector-search/)
[Query Tuning and Indexes](https://neo4j.com/docs/cypher-manual/current/planning-and-tuning/query-tuning/indexes/)
[Neo4j LangChain Vector Index Implementation](https://neo4j.com/developer-blog/neo4j-langchain-vector-index-implementation/)
Hope you find them helpful!
### Motivation
I am always frustrated when the following code fails:
``` Python
from langchain.graphs import Neo4jGraph
graph = Neo4jGraph(
url=os.environ["NEO4J_URI"],
username=os.environ["NEO4J_USERNAME"],
password=os.environ["NEO4J_PASSWORD"]
)
```
``` Python
vector_index = Neo4jVector.from_existing_graph(
HuggingFaceBgeEmbeddings(),
url=os.environ["NEO4J_URI"],
username=os.environ["NEO4J_USERNAME"],
password=os.environ["NEO4J_PASSWORD"],
index_name='New_index',
node_label="Embeddable",
text_node_properties=['definition', 'term', 'clause'],
embedding_node_property='embedding',
)
```
The above code results in a TypeError:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-59-e24655343023>](https://localhost:8080/#) in <cell line: 1>()
----> 1 vector_index = Neo4jVector.from_existing_graph(
2 HuggingFaceBgeEmbeddings(),
3 url=os.environ["NEO4J_URI"],
4 username=os.environ["NEO4J_USERNAME"],
5 password=os.environ["NEO4J_PASSWORD"],
[/usr/local/lib/python3.10/dist-packages/langchain_community/vectorstores/neo4j_vector.py](https://localhost:8080/#) in from_existing_graph(cls, embedding, node_label, embedding_node_property, text_node_properties, keyword_index_name, index_name, search_type, retrieval_query, **kwargs)
877 + "} AS metadata, score"
878 )
--> 879 store = cls(
880 embedding=embedding,
881 index_name=index_name,
TypeError: Neo4jVector.__init__() got an unexpected keyword argument 'embedding_dimension'
```
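Until this is fixed, one hedged workaround is to create the vector index yourself in Cypher and let the wrapper attach to it. The dimension below is an assumption and must match the embedding model in use:

``` Python
# hedged workaround sketch; index name, label and property mirror the call above
graph.query(
    """
    CREATE VECTOR INDEX New_index IF NOT EXISTS
    FOR (n:Embeddable) ON (n.embedding)
    OPTIONS {indexConfig: {
        `vector.dimensions`: 768,
        `vector.similarity_function`: 'cosine'
    }}
    """
)
```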
### Your contribution
Eager to contribute! Ready to submit a pull request for this feature enhancement. Will ensure alignment with the project's guidelines by thoroughly reading the Contributing Guide. Open to guidance and collaboration.
Excited to contribute! | Feature Request - Explicit Creation of Vector Index with Custom Embedding Configuration in Neo4j Graph DB | https://api.github.com/repos/langchain-ai/langchain/issues/16272/comments | 1 | 2024-01-19T13:38:40Z | 2024-04-26T16:36:21Z | https://github.com/langchain-ai/langchain/issues/16272 | 2,090,595,505 | 16,272 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.chains.loading import load_chain_from_config
from langchain.embeddings import OpenAIEmbeddings
import os
import json
os.environ['OPENAI_API_KEY']="XXX"
# config details in Description below
with open('config.json', 'r') as reader:
config = json.load(reader)
loader = TextLoader("XXX.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=200, chunk_overlap=0, separator='')
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)
retriever = docsearch.as_retriever()
chain = load_chain_from_config(config, retriever=docsearch.as_retriever())
print (chain.dict()) # compared with config, missing prompt with input_variable 'question'
```
### Description
Bug in loading config: when the config contains a prompt with `_type` 'chat' and the length of 'messages' is greater than 1, some messages and their input variables are dropped.
caused by this code:
```python
def _load_chat_prompt(config: Dict) -> ChatPromptTemplate:
"""Load chat prompt from config"""
messages = config.pop("messages")
template = messages[0]["prompt"].pop("template") if messages else None
config.pop("input_variables")
if not template:
raise ValueError("Can't load chat prompt without template")
return ChatPromptTemplate.from_template(template=template, **config)
```
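A hedged sketch of what a fix could look like (not the official patch; inferring roles from message order is a simplification here):

```python
from langchain_core.prompts import ChatPromptTemplate

def _load_chat_prompt_fixed(config: dict) -> ChatPromptTemplate:
    """Load every message from the config, not just the first."""
    messages = config.pop("messages")
    config.pop("input_variables", None)
    templates = [m["prompt"]["template"] for m in messages or []]
    if not templates:
        raise ValueError("Can't load chat prompt without template")
    return ChatPromptTemplate.from_messages([("human", t) for t in templates])
```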
config example:
```json
{
"_type":"retrieval_qa",
"combine_documents_chain":{
"_type":"stuff_documents_chain",
"document_prompt":{
"_type":"prompt",
"input_types":{},
"input_variables":[
"page_content"
],
"output_parser":null,
"partial_variables":{},
"template":"{page_content}",
"template_format":"f-string",
"validate_template":false
},
"document_separator":"\n\n",
"document_variable_name":"context",
"input_key":"input_documents",
"llm_chain":{
"_type":"llm_chain",
"llm":{
"_type":"openai",
"frequency_penalty":0,
"logit_bias":{},
"max_tokens":256,
"model_name":"gpt-3.5-turbo-instruct",
"n":1,
"presence_penalty":0,
"temperature":0.7,
"top_p":1
},
"llm_kwargs":{},
"memory":null,
"metadata":null,
"output_key":"text",
"output_parser":{
"_type":"default"
},
"prompt":{
"input_variables": [
"context",
"question"
],
"input_types": {},
"output_parser": null,
"partial_variables": {},
"messages": [
{
"prompt": {
"input_variables": [
"context"
],
"input_types": {},
"output_parser": null,
"partial_variables": {},
"template": "Use the following pieces of context to answer the user's question. \nIf you don't know the answer, just say that you don't know, don't try to make up an answer.\n----------------\n{context}",
"template_format": "f-string",
"validate_template": false,
"_type": "prompt"
},
"additional_kwargs": {}
},
{
"prompt": {
"input_variables": [
"question"
],
"input_types": {},
"output_parser": null,
"partial_variables": {},
"template": "{question}",
"template_format": "f-string",
"validate_template": false,
"_type": "prompt"
},
"additional_kwargs": {}
}
],
"validate_template": false,
"_type": "chat"
},
"return_final_only":true,
"tags":null,
"verbose":false
},
"memory":null,
"metadata":null,
"output_key":"output_text",
"tags":null,
"verbose":false
},
"input_key":"query",
"memory":null,
"metadata":null,
"output_key":"result",
"return_source_documents":false,
"tags":null,
"verbose":false
}
```
### System Info
langchain: 0.0.352
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Missing messages in _load_chat_prompt function.Only the first message in the prompt is loaded. | https://api.github.com/repos/langchain-ai/langchain/issues/16271/comments | 1 | 2024-01-19T12:50:24Z | 2024-04-26T16:25:43Z | https://github.com/langchain-ai/langchain/issues/16271 | 2,090,489,647 | 16,271 |
[
"hwchase17",
"langchain"
] | I am also facing the same issue. Since OpenAI has updated its API, I need to use openai==1.7.1 and langchain==0.1.0.
Because of this, I am not able to pickle the store locally.

Is there also an issue with `store = FAISS.from_texts(docs, embeddings, metadatas=metadatas)`?
When accessing the store directly via `chain = RetrievalQAWithSourcesChain.from_chain_type(llm=llm, retriever=store.as_retriever())`,
the answers come back as only 5 to 8 words. Why? And this happens only in the latest version.
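As a hedged workaround for the pickling part: FAISS ships its own serializer, which avoids pickling the embeddings client (the part holding the thread locks).

```python
# hedged workaround sketch: persist via FAISS's serializer instead of pickle
store.save_local("faiss_index")
# later:
store = FAISS.load_local("faiss_index", embeddings)
```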
Any ideas? | RetrievalQA and AzureOpenAIEmbeddings lead to TypeError: cannot pickle '_thread.lock' object | https://api.github.com/repos/langchain-ai/langchain/issues/16267/comments | 2 | 2024-01-19T11:24:19Z | 2024-07-07T16:07:06Z | https://github.com/langchain-ai/langchain/issues/16267 | 2,090,325,924 | 16,267
[
"hwchase17",
"langchain"
] | 
I am using langchain==0.1.1 and openai==1.8.0, with the latest versions of all langchain modules. | Retrieval chain is giving response in some words only why? | https://api.github.com/repos/langchain-ai/langchain/issues/16266/comments | 1 | 2024-01-19T11:19:33Z | 2024-04-26T16:34:58Z | https://github.com/langchain-ai/langchain/issues/16266 | 2,090,315,840 | 16,266
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
There seems to be no documentation for the AgentExecutor.invoke() method, where I would expect to see the API call code. This is of crucial importance from a compliance perspective. Could you please provide documentation for API calls that reside inside your wrappers like langchain_openai.AzureChatOpenAI()?
### Idea or request for content:
_No response_ | DOC: No documentation for OpenAI or AzureOpenAI API calls | https://api.github.com/repos/langchain-ai/langchain/issues/16265/comments | 1 | 2024-01-19T11:12:47Z | 2024-04-30T16:23:07Z | https://github.com/langchain-ai/langchain/issues/16265 | 2,090,301,728 | 16,265 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
The following code:
```python
agent_system = create_openai_tools_agent(
llm=llm,
tools=tools,
prompt=prompt)
agent = AgentExecutor(
agent=agent_system, # Sets the agent to use OpenAI tool calling
tools=tools, # Sets the tools visible to the LLM
return_intermediate_steps=True, # Get a list of traces of the trajectory of the chain
max_iterations=4, # Sets the number of intermediate steps
early_stopping_method="generate", # Applies final pass to generate an output if max iterations is reached
verbose=verbose) # Makes internal states visible to debug
agent.invoke({'input': user_input})
```
### Description
If this code reaches the max iterations, it returns an error.
ERROR: Got unsupported early_stopping_method `generate`
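Switching to "force" unblocks this (hedged sketch; as far as I can tell, runnable-based agents only implement that method):

```python
agent = AgentExecutor(
    agent=agent_system,
    tools=tools,
    return_intermediate_steps=True,
    max_iterations=4,
    early_stopping_method="force",  # returns a canned "stopped" answer at the limit
    verbose=verbose,
)
```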
### System Info
Installed via pip: langchain==0.1.0, langchain-openai==0.0.2.post1
Using as LLM GPT-3.5 Turbo 0613
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Unsupported early_stopping_method="generate" in AgentExecutor after reaching iteration limit | https://api.github.com/repos/langchain-ai/langchain/issues/16263/comments | 8 | 2024-01-19T10:01:50Z | 2024-06-17T15:22:17Z | https://github.com/langchain-ai/langchain/issues/16263 | 2,090,142,455 | 16,263 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
My code looks as follows:
```python
import os
from datetime import datetime
from langchain_openai import OpenAIEmbeddings
from langchain_openai import OpenAI
from langchain.memory import VectorStoreRetrieverMemory
from langchain.chains import ConversationChain
from langchain.prompts import PromptTemplate
import faiss
from langchain.docstore import InMemoryDocstore
from langchain_community.vectorstores import FAISS
embedding_size = 1536 # Dimensions of the OpenAIEmbeddings
index = faiss.IndexFlatL2(embedding_size)
embedding_fn = OpenAIEmbeddings(openai_api_key=os.getenv("OPENAI_API_KEY")).embed_query
vectorstore = FAISS(embedding_fn, index, InMemoryDocstore({}), {})
# In actual usage, you would set `k` to be a higher value, but we use k=1 to show that
# the vector lookup still returns the semantically relevant information
retriever = vectorstore.as_retriever(search_kwargs=dict(k=1))
memory = VectorStoreRetrieverMemory(retriever=retriever)
print(memory)
# When added to an agent, the memory object can save pertinent information from conversations or used tools
memory.save_context({"input": "My favorite food is pizza"}, {"output": "that's good to know"})
memory.save_context({"input": "My favorite sport is soccer"}, {"output": "..."})
memory.save_context({"input": "I don't the Celtics"}, {"output": "ok"}) #
```
**Running this results in the following error**:
```
`embedding_function` is expected to be an Embeddings object, support for passing in a function will soon be removed.
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[54], line 23
20 # In actual usage, you would set `k` to be a higher value, but we use k=1 to show that
21 # the vector lookup still returns the semantically relevant information
22 retriever = vectorstore.as_retriever(search_kwargs=dict(k=1))
---> 23 memory = VectorStoreRetrieverMemory(retriever=retriever)
24 print(memory)
25 # When added to an agent, the memory object can save pertinent information from conversations or used tools
File ~\anaconda3\envs\python310\lib\site-packages\langchain\load\serializable.py:97, in Serializable.__init__(self, **kwargs)
96 def __init__(self, **kwargs: Any) -> None:
---> 97 super().__init__(**kwargs)
98 self._lc_kwargs = kwargs
File ~\anaconda3\envs\python310\lib\site-packages\pydantic\v1\main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for VectorStoreRetrieverMemory
retriever -> vectorstore
instance of VectorStore expected (type=type_error.arbitrary_type; expected_arbitrary_type=VectorStore)
```
### Description
I'm trying to follow the example [here](https://python.langchain.com/docs/modules/memory/types/vectorstore_retriever_memory) in a Jupyter notebook.
The code is an exact copy from the example but when I run the code I get the validation error.
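My current suspicion (hedged) is a package-generation mismatch: langchain 0.0.340's memory validates the retriever against its own VectorStore base class, while `langchain_community.vectorstores.FAISS` derives from a newer one. Aligning versions so both resolve to the same base class looks like the fix:

```python
# hedged check after `pip install -U langchain langchain-community langchain-openai`
from langchain_community.vectorstores import FAISS

print([c.__module__ for c in FAISS.__mro__])  # should include langchain_core.vectorstores
```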
Any help much appreciated!
Thanks
### System Info
Python: 3.10
conda 23.10.0
Langchain Version: 0.0.340
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Trying to create VectorStoreRetrieverMemory causes an error | https://api.github.com/repos/langchain-ai/langchain/issues/16261/comments | 3 | 2024-01-19T09:39:30Z | 2024-06-01T00:19:28Z | https://github.com/langchain-ai/langchain/issues/16261 | 2,090,092,244 | 16,261 |
[
"hwchase17",
"langchain"
] | I have created a RetrievalQA Chain, but facing an issue. When calling the Chain, I get the following error: `ValueError: Missing some input keys: {'query', 'typescript_string'} `
My code looks as follows:
```
from langchain_community.embeddings import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings(model_name="intfloat/multilingual-e5-large",
model_kwargs={'device': 'mps'}, encode_kwargs={'device': 'mps', 'batch_size': 32})
vectorstore = FAISS.load_local("vectorstore/db_faiss_bkb", embeddings)
retriever = vectorstore.as_retriever(search_kwargs={'k': 1, 'score_treshold': 0.75}, search_type="similarity")
llm = build_llm("modelle/mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf")
def build_retrieval_qa(llm, prompt, vectordb):
chain_type_kwargs={ "prompt": prompt,
"verbose": False
}
dbqa = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever=vectordb,
return_source_documents=True,
chain_type_kwargs=chain_type_kwargs,
verbose=True
)
return dbqa
prompt = """
<s> [INST] You are getting a task and a User input. If there is relevant information in the context, please add this information to your answer.
### Here the Task: ###
{typescript_string}
### Here the context: ###
{context}
### Here the User Input: ###
{query}
Answer: [/INST]
"""
prompt_temp = PromptTemplate(template=prompt, input_variables=["typescript_string", "context", "query"])
dbqa1 = build_retrieval_qa(llm=llm,prompt=prompt_temp,vectordb=retriever)
question = "What is IGB?"
types = "Answer shortly!"
dbqa1({"query": question, "typescript_string": types})
```
With this code the error from above occurs in the last line here.
The weird thing is, that it is working with a LLM-Chain from Langchain without Retrieval:
```
from langchain.chains import LLMChain
llm_chain = LLMChain(
llm=llm,
prompt= prompt_temp,
verbose=True,
)
test = llm_chain({"type_string": types, "input": question})
test
```
This works and I am getting a correct response. I am using
`Langchain == 0.1.0`
So is there something wrong with my PromptTemplate or is this a bug? | RetrievalQA Prompt Error: Missing some input keys | https://api.github.com/repos/langchain-ai/langchain/issues/16260/comments | 1 | 2024-01-19T09:18:58Z | 2024-04-26T16:32:23Z | https://github.com/langchain-ai/langchain/issues/16260 | 2,090,048,016 | 16,260 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
persist_directory = './chroma.db'
chroma_setting = Settings(anonymized_telemetry=False,persist_directory=persist_directory)
# model_name = "intfloat/multilingual-e5-base"
model_name = "shibing624/text2vec-base-chinese"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
embed_impl = HuggingFaceEmbeddings(model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs)
def generate_text_embedding():
loader = CSVLoader("./drug.csv")
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
split_docs = text_splitter.split_documents(documents)
vectorstore = Chroma.from_documents(documents=split_docs, persist_directory=persist_directory, embedding=embed_impl, client_settings=chroma_setting)
### Description
When employing Chroma VectorStore, the specified configuration of `chroma_setting=Settings(anonymized_telemetry=False)` does not result in the desired functionality of `persist_directory`.
---
I discovered in the source code that when configuring the `chroma_setting` parameter, solely specifying the `persist_directory` parameter does not suffice for vector persistence. It is imperative to additionally set `is_persistent=True` to ensure proper vector storage.
```python
chroma_setting = Settings(anonymized_telemetry=False, is_persistent=True, persist_directory=persist_directory)
```
### System Info
Python 3.10.13
langchain 0.1.0
langchain-community 0.0.13
langchain-core 0.1.10
langchain-openai 0.0.2.post1
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | When configuring with `chroma_setting=Settings(anonymized_telemetry=False)`, the `Chroma.from_documents(persist_directory="./chroma.db")` parameter is not taking effect. | https://api.github.com/repos/langchain-ai/langchain/issues/16259/comments | 1 | 2024-01-19T08:17:25Z | 2024-04-26T16:27:52Z | https://github.com/langchain-ai/langchain/issues/16259 | 2,089,917,914 | 16,259 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
from langchain.chains.summarize import load_summarize_chain
from langchain.schema import Document
from langchain.chat_models import ChatOpenAI
documents = [
Document(page_content="LangChain is a framework for developing applications powered by language models"),
Document(page_content="LangChain provides the Python and JavaScript libraries."),
Document(page_content="LangChain contains interfaces and integrations for a myriad of components.")
]
llm = ChatOpenAI(...)
chain = load_summarize_chain(
llm=llm,
chain_type="map_reduce",
token_max=30,
)
chain.run(input_documents=documents)
```
### Description
During map reduce's reduce step, when it tries to collapse documents into one, depending on `token_max` user provides, it will hit the infinite loop since it can never collapse to fit `token_max`. This `token_max` is valid since it is bigger than individual document tokens, but small to collapse documents to fit in. This will also cause continuous run collapse chain (or LLM calls).
### System Info
langchain 0.1.1
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | load_summarize_chain hits infinite loop due to small token_max | https://api.github.com/repos/langchain-ai/langchain/issues/16251/comments | 1 | 2024-01-19T04:26:13Z | 2024-02-15T16:35:45Z | https://github.com/langchain-ai/langchain/issues/16251 | 2,089,559,781 | 16,251 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Is there a way to add multiple tables from different databases and schema using SQLDatabase?
For example: DB-1 has these tables- "city_stats", "country_stats"
Presently these tables are from 1 database = SQLDatabase(engine, include_tables=["city_stats", "country_stats"])
if we have 2 different databases DB-1(tables- "city_stats", "country_stats") and DB-2(tables- "company_stats", "team_stats") , how to include tables from 2 different databases?
### Description
Is there a way to add multiple tables from different databases and schema using SQLDatabase?
For example: DB-1 has these tables- "city_stats", "country_stats"
Presently these tables are from 1 database = SQLDatabase(engine, include_tables=["city_stats", "country_stats"])
if we have 2 different databases DB-1(tables- "city_stats", "country_stats") and DB-2(tables- "company_stats", "team_stats") , how to include tables from 2 different databases?
### System Info
langchain Version: 0.0.348
python Version: Python 3.9.18
OS: Mac OS M2 (Ventura 13.6.2)
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async | How to add multiple tables from two different databases and schema using SQLDatabase? | https://api.github.com/repos/langchain-ai/langchain/issues/16249/comments | 5 | 2024-01-19T04:19:12Z | 2024-05-17T01:39:04Z | https://github.com/langchain-ai/langchain/issues/16249 | 2,089,552,752 | 16,249 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
import os
import qdrant_client
from dotenv import load_dotenv
from langchain.chains import RetrievalQA
from langchain.embeddings.huggingface import HuggingFaceInstructEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import Qdrant
from langchain_core.documents import Document
from langchain_core.prompts import PromptTemplate
os.environ['OPENAI_API_KEY'] = "key"
document_prompt = PromptTemplate(
input_variables=["page_content"], template="Context:\n{page_content}"
)
prompt = PromptTemplate(
input_variables=["context", "question"], template="""你很喜歡在回話的結尾加入語助詞「超派」.
{context}
Question: {question}
Answer:
""""")
def get_vector_store():
client = qdrant_client.QdrantClient(
os.getenv('QDRANT_HOST'),
)
embeddings = HuggingFaceInstructEmbeddings(model_name="all-MiniLM-L6-v2")
vector_store = Qdrant(
client=client,
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
embeddings=embeddings,
)
return vector_store
def main():
load_dotenv()
vectorstore = get_vector_store()
documents = []
qa = RetrievalQA.from_chain_type(
llm=OpenAI(
temperature=0.7,
max_tokens=100,
prompt=prompt
),
chain_type="stuff",
retriever=vectorstore.as_retriever(search_type="similarity_score_threshold",
search_kwargs={"score_threshold": 0.3, "k": 1000},
collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
),
)
while True:
question = input("冒險者:")
document = Document(page_content=question, metadata={'source': 'user'})
documents.append(document)
answer = qa.invoke(question)
print(f"Ai: {answer}")
vectorstore.add_documents([document])
if question == "bye":
break
if __name__ == "__main__":
main()
Why can't I add a prompt for personality anymore? The error message is as follows:
UserWarning: WARNING! prompt is not default parameter. prompt was transferred to model_kwargs. Please confirm that prompt is what you intended.TypeError: langchain_community.llms.openai.completion_with_retry() got multiple values for keyword argument 'prompt
### Idea or request for content:
Why can't I add a prompt for personality anymore? The error message is as follows:
UserWarning: WARNING! prompt is not default parameter. prompt was transferred to model_kwargs. Please confirm that prompt is what you intended.TypeError: langchain_community.llms.openai.completion_with_retry() got multiple values for keyword argument 'prompt | Why can't I add a prompt for personality anymore? | https://api.github.com/repos/langchain-ai/langchain/issues/16247/comments | 1 | 2024-01-19T02:18:54Z | 2024-04-26T16:25:38Z | https://github.com/langchain-ai/langchain/issues/16247 | 2,089,420,863 | 16,247 |
[
"hwchase17",
"langchain"
] | ### Feature request
HI! Whether the current langchain framework can be implemented on mobile, because considering the current computing power of the server is not so sufficient.We can use the computing power of our phones as well.
In addition, the deployment on the mobile phone will pay more attention to the user's privacy, and it is a good choice in some cases where the network connection is not possible.
### Motivation
As far as I know, there is already the ability to deploy LLM to mobile.eg:[MLC-AI](https://github.com/mlc-ai/mlc-llm)
### Your contribution
It is completely possible to consider deploying lightweight small databases and lightweight small LLM models (eg: [MiniChat](https://huggingface.co/GeneZC/MiniChat-2-3B),[Qwen](https://huggingface.co/Qwen/Qwen-1_8B-Chat)) on mobile phones and use them.
Thanks | Implementation of langchain based on mobile terminal | https://api.github.com/repos/langchain-ai/langchain/issues/16245/comments | 2 | 2024-01-19T01:22:51Z | 2024-04-07T02:19:05Z | https://github.com/langchain-ai/langchain/issues/16245 | 2,089,379,273 | 16,245 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Setting up our agent with this memory/prompt:
```
chat_history = ChatMessageHistory()
MEMORY_KEY = "chat_history"
memory = ConversationBufferMemory(memory_key=MEMORY_KEY, return_messages=True, chat_memory=chat_history)
prompt = OpenAIFunctionsAgent.create_prompt(
system_message=system_message,
extra_prompt_messages=[MessagesPlaceholder(variable_name=MEMORY_KEY)],
)
```
and using `agent_executor.astream_log` to stream (instead of .invoke), I get this error:
`One output key expected, got dict_keys(['output', 'messages'])`
### Description
get this error: `One output key expected, got dict_keys(['output', 'messages'])`
### System Info
langchain 0.1.0
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | One output key expected when using astream_log and ConversationBufferMemory | https://api.github.com/repos/langchain-ai/langchain/issues/16242/comments | 2 | 2024-01-19T00:28:12Z | 2024-01-20T22:53:28Z | https://github.com/langchain-ai/langchain/issues/16242 | 2,089,337,165 | 16,242 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
The URL that is linked on these pages:
https://blog.langchain.dev/automating-web-research/
https://python.langchain.com/docs/use_cases/web_scraping#research-automation
is
https://python.langchain.com/docs/modules/data_connection/retrievers/web_research
but that URL returns "Page not found"
### Idea or request for content:
_No response_ | DOC: web_research docs showing "Page not found" | https://api.github.com/repos/langchain-ai/langchain/issues/16241/comments | 1 | 2024-01-19T00:01:55Z | 2024-04-26T16:25:35Z | https://github.com/langchain-ai/langchain/issues/16241 | 2,089,311,393 | 16,241 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
Here is a simple implementation running meta's llama2 in langchain:
```
from langchain_community.llms import HuggingFaceHub
llm = HuggingFaceHub(
repo_id="meta-llama/Llama-2-7b-chat-hf",
huggingfacehub_api_token=token,
)
print(llm.invoke("Who are you?"))
```
The above code produces the following output:
```
son who is passionate about helping others. I have a strong desire to make a positive impact....
```
It appears that the beginning of the generated text is cut off. However, it _should_ produce identical output to what you would get from using Hugging Face directly without LangChain:
```
from huggingface_hub import InferenceApi
repo_id="meta-llama/Llama-2-7b-chat-hf"
client = InferenceApi(
repo_id=repo_id,
token=token,
task='text-generation')
output = client("Who are you?")
print(output[0]["generated_text"])
```
Here the output is not cut off:
```
I am a person who is passionate about helping others. I have a strong desire to make a positive impact...
```
### Description
Some text-generation models on huggingface repeat the prompt in their generated response, but not all do! If you are using a model like "gpt2" which DOES repeat the prompt langchain will properly remove the first few characters of the response (to match the len(prompt)). However, if you are using a model (such as the very popular "meta-llama/Llama-2-7b-chat-hf") that DOES NOT repeat the prompt in its generated text, then the beginning of the generated text will be cut off.
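A hedged, simplified illustration of the stripping behavior described above (not the exact library source):

```python
prompt = "Who are you?"
generated_text = "I am a person who is passionate about helping others."
# the client slices off len(prompt) characters unconditionally, which is only
# correct when the model echoes the prompt back at the start of its output:
print(generated_text[len(prompt):])
```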
### System Info
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.10
langchain-openai==0.0.2.post1
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | HuggingFaceHub removes leading characters of text-generation | https://api.github.com/repos/langchain-ai/langchain/issues/16232/comments | 1 | 2024-01-18T21:11:10Z | 2024-01-19T14:10:48Z | https://github.com/langchain-ai/langchain/issues/16232 | 2,089,055,329 | 16,232 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
When I provide a prompt like this:
```python
prompt_profile_declaration = PromptTemplate(
    input_variables=['question_answers', 'json_structure'],
    template=(
        "####INSTRUCTION: Here is a questions/answers list for a Profile. \n \n {question_answers} \n \n "
        "The Profile is linked to a company directly or through many companies. Based on the Profile's "
        "declaration, determine how they are related to the companies. Fill the structure below ONLY with "
        "the Profile information above. DO NOT FORGET ANY KEYS \n \n {json_structure} \n \n "
        "If instructions are given in the structure, follow them. DO NOT WRITE COMMENTS AT ALL, "
        "ONLY RETURN JSON format."
    ),
)
```
### Description
With the prompt above, the model forgets more than half of the keys and produces very inaccurate results, even with gpt-4. However, when I use the plain ChatGPT API it works perfectly well, and ChatOllama from LangChain also works as expected. A hedged reformulation to try is sketched below.
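For reference, a hedged reformulation sketch (my suggestion, not a confirmed fix): chat models often track structured instructions better when they arrive as explicit system/human messages rather than one flat template:

```python
from langchain_core.prompts import ChatPromptTemplate

# variable names mirror the prompt above
prompt = ChatPromptTemplate.from_messages([
    ("system",
     "Fill the JSON structure using ONLY the profile information provided. "
     "Do not omit any keys. Return JSON only, with no comments."),
    ("human", "{question_answers}\n\n{json_structure}"),
])
```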
### System Info
langchain==0.1.0
langchain-community==0.0.10
langchain-core==0.1.8
langchain-mistralai==0.0.2.post1
langchain-openai==0.0.2.post1
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Appalling performances when using ChatOpenAI and ChatMistralAI | https://api.github.com/repos/langchain-ai/langchain/issues/16228/comments | 1 | 2024-01-18T20:33:35Z | 2024-01-18T21:04:45Z | https://github.com/langchain-ai/langchain/issues/16228 | 2,089,008,469 | 16,228 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_community.llms.sagemaker_endpoint import SagemakerEndpoint
sm_llm = SagemakerEndpoint(
endpoint_name="endpoint-name",
client=sagemaker_client,
content_handler=content_handler,
model_kwargs={"max_new_tokens": 100, "top_p": 0.2, "temperature": 0.1},
endpoint_kwargs={"InferenceComponentName": "model-name"},
streaming=True,
)
sm_llm.generate(["<s>[INST] what is the recipe of mayonnaise? [/INST]"])
```
### Description
While using SagemakerEndpoint, if you have `streaming=False`, this code works fine and gives a response. When you have `streaming=True`, SagemakerEndpoint goes into an infinite loop in the `LineIterator` at this [line](https://github.com/langchain-ai/langchain/blob/8840a8cc95179dd945b1f4ab9eb08e34318ed2f4/libs/community/langchain_community/llms/sagemaker_endpoint.py#L66).
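For context, a hedged sketch of the response shape the streaming path expects (assuming a boto3 sagemaker-runtime client): if the container never emits `PayloadPart` chunks, `LineIterator` has nothing to read and spins forever.

```python
# hedged sketch, not library code; endpoint name is the placeholder from above
response = sagemaker_client.invoke_endpoint_with_response_stream(
    EndpointName="endpoint-name",
    Body=b'{"inputs": "..."}',
    ContentType="application/json",
)
for event in response["Body"]:
    part = event.get("PayloadPart")
    if part:
        print(part["Bytes"])
```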
### System Info
```
langchain==0.1.0
langchain-community==0.0.11
langchain-core==0.1.8
```
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Streaming with SagemakerEndpoint class doesn't return a response [LineIterator infinite loop] | https://api.github.com/repos/langchain-ai/langchain/issues/16219/comments | 4 | 2024-01-18T18:44:31Z | 2024-07-26T16:05:57Z | https://github.com/langchain-ai/langchain/issues/16219 | 2,088,862,579 | 16,219 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from itertools import cycle

from langchain_core.runnables import RunnableLambda


def success(inputs: str) -> str:
    return "success"


iterable = cycle([ValueError("error"), "hello!"])


def fail(inputs: str) -> str:
    """Fail on the first call, succeed on the retry."""
    s = next(iterable)
    if isinstance(s, Exception):
        raise s
    return s


chain = RunnableLambda(success) | RunnableLambda(fail).with_retry(
    stop_after_attempt=2,
)

# Expected: with_retry absorbs the first ValueError and the stream completes.
# Actual: the ValueError is re-raised through astream_log.
run_logs = [log async for log in chain.astream_log({"inputs": "hello"})]
assert run_logs == []  # never reached: the ValueError above bubbles up first
```
### Description
The exception bubbles up through `astream_log` even though it is caught by the retry decorator and the retried call succeeds.
### System Info
```(langchain_3_11_4) ➜ langchain git:(master) ✗ python -m "langchain_core.sys_info"
System Information
------------------
> OS: Linux
> OS Version: #14~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Nov 20 18:15:30 UTC 2
> Python Version: 3.11.4 (main, Sep 25 2023, 10:06:23) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.10
> langchain: 0.1.0
> langchain_community: 0.0.11
> langserve: 0.0.19
```
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | astream_log doesn't catch exception correctly even though exception is caught by the retry decorator | https://api.github.com/repos/langchain-ai/langchain/issues/16212/comments | 0 | 2024-01-18T17:20:22Z | 2024-04-25T16:17:15Z | https://github.com/langchain-ai/langchain/issues/16212 | 2,088,736,917 | 16,212 |
[
"hwchase17",
"langchain"
] | ### Feature request
Add support for `get_num_tokens()` (token counting) on `ChatGoogleGenerativeAI` and `GoogleGenerativeAI`.
### Motivation
This feature has the benefits below:
- Useful for checking whether an input will fit in a model's context window.
- Useful for estimating usage costs.
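A sketch of the intended call site — `get_num_tokens` is the method name LangChain's `BaseLanguageModel` already exposes (with a generic tokenizer fallback), so presumably the Google models would implement the same interface natively; the model name and prompt here are illustrative:

```python
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")

# Useful for context-window checks and cost estimates.
n_tokens = llm.get_num_tokens("Will this prompt fit in the context window?")
print(n_tokens)
```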
### Your contribution
I'll submit a PR. | Added Token Count support for Gen AI Models | https://api.github.com/repos/langchain-ai/langchain/issues/16204/comments | 1 | 2024-01-18T14:43:52Z | 2024-04-25T16:23:50Z | https://github.com/langchain-ai/langchain/issues/16204 | 2,088,437,151 | 16,204
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
import os
import warnings

import openai
import pandas as pd
import streamlit as st
from dotenv import load_dotenv
from langchain.agents.agent_types import AgentType
from langchain.chat_models import AzureChatOpenAI
from langchain.llms import AzureOpenAI
from langchain_experimental.agents.agent_toolkits import create_csv_agent

warnings.filterwarnings("ignore")

st.sidebar.title("Powered by: GSC Data & Analytics")
st.sidebar.markdown(
    """
    Upload your csv files and do basic query over it
    """
)
st.header("Ask your CSV 📈", divider='rainbow')


def _handle_error(error) -> str:
    return str(error)[:200]


csv_files = st.file_uploader("Upload a CSV file", type={"csv"}, accept_multiple_files=True)
if csv_files is not None:
    selected_file = st.selectbox("Select a CSV file", [file.name for file in csv_files])
    selected_index = [file.name for file in csv_files].index(selected_file)
    data = pd.read_csv(csv_files[selected_index])
    data.to_csv(selected_file)
    st.dataframe(data, use_container_width=True)

    agent = create_csv_agent(
        AzureChatOpenAI(
            openai_api_type="azure",
            openai_api_key=open_api_key,  # defined elsewhere (redacted)
            openai_api_base="https://openai-nonfunsbx-pprd-01.openai.azure.com",
            openai_api_version="2023-03-15-preview",
            deployment_name="gpt-35-turbo-0301",
        ),
        selected_file,
        verbose=True,
        agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        handle_parsing_errors=_handle_error,
    )

    if "messages" not in st.session_state:
        st.session_state["messages"] = [{"role": "assistant", "content": "How can I help you?"}]

    for msg in st.session_state.messages:
        st.chat_message(msg["role"]).write(msg["content"])

    if prompt := st.chat_input():
        st.chat_message("user").write(prompt)
        st.session_state.messages.append({"role": "user", "content": prompt})
        with st.chat_message("assistant"):
            st.write(":brain: thinking...")
            response = agent.run(prompt)
            st.session_state["messages"].append({"role": "assistant", "content": response})
            st.write(response)
```
### Description
It works fine locally, but when we deploy it on an EC2 server it raises:

```
NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}

Traceback:
File "/home/adm683533/chatcsv/env/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
    exec(code, module.__dict__)
File "/home/adm683533/chatcsv/chatcsv_v2/app7.py", line 80, in <module>
    response = agent.run(prompt)
File "/home/adm683533/chatcsv/env/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
File "/home/adm683533/chatcsv/env/lib/python3.10/site-packages/langchain/chains/base.py", line 538, in run
    return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/adm683533/chatcsv/env/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
```
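A 404 from Azure OpenAI usually means the endpoint, deployment name, or API version doesn't resolve from that machine. One way to isolate the Azure configuration from the agent — a minimal sketch, assuming the `openai>=1.x` SDK is installed on the server; the values reuse those from the code above:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=open_api_key,  # same (redacted) key as above
    api_version="2023-03-15-preview",
    azure_endpoint="https://openai-nonfunsbx-pprd-01.openai.azure.com",
)
resp = client.chat.completions.create(
    model="gpt-35-turbo-0301",  # Azure expects the *deployment* name here
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```

If this also returns 404 on EC2, the problem is the Azure configuration or network, not LangChain.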
### System Info
langchain version 0.1.0
python version 3.10
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}} | https://api.github.com/repos/langchain-ai/langchain/issues/16197/comments | 1 | 2024-01-18T12:54:06Z | 2024-04-25T16:20:44Z | https://github.com/langchain-ai/langchain/issues/16197 | 2,088,230,189 | 16,197 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
I need to have access to the invoke method of langchain.agents.agent.AgentExecutor class. In the documentation, I can see only the definition but not the source code (please see below). It would be super helpful to be able to see the source code. I am getting some errors from this method which I cannot resolve.
async ainvoke(input: Dict[str, Any], config: Optional[[RunnableConfig](https://api.python.langchain.com/en/stable/runnables/langchain_core.runnables.config.RunnableConfig.html#langchain_core.runnables.config.RunnableConfig)] = None, **kwargs: Any) → Dict[str, Any]
Link to the documentation: https://api.python.langchain.com/en/stable/agents/langchain.agents.agent.AgentExecutor.html#langchain.agents.agent.AgentExecutor
Thank you so much!
### Idea or request for content:
_No response_ | DOC: documentation missing for invoke method of AgentExecutor class | https://api.github.com/repos/langchain-ai/langchain/issues/16195/comments | 1 | 2024-01-18T12:37:04Z | 2024-04-25T16:24:06Z | https://github.com/langchain-ai/langchain/issues/16195 | 2,088,201,613 | 16,195 |
[
"hwchase17",
"langchain"
] | below is my code
```
class CustomCallbackHandler(StreamingStdOutCallbackHandler):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
# Implement here your streaming logic
print(token, end='', flush=True)
callback = CustomCallbackHandler()
def regenerate_custom_prompt(chroma_db_path=None,query=None,name=None,not_uuid=None):
check = query.lower()
embedding = OpenAIEmbeddings()
vectordb = Chroma(persist_directory=chroma_db_path, embedding_function=embedding)
llm = ChatOpenAI(model="gpt-3.5-turbo-16k",temperature=0.1,streaming=True,callbacks=[callback],)
retriever = vectordb.as_retriever(search_type="mmr", search_kwargs={"k":4})
qa = MultiQueryRetriever.from_llm(llm=llm, retriever=retriever)
``` | How can I implement Chat Streaming in Langchain? | https://api.github.com/repos/langchain-ai/langchain/issues/16188/comments | 2 | 2024-01-18T10:43:43Z | 2024-01-18T17:23:16Z | https://github.com/langchain-ai/langchain/issues/16188 | 2,088,009,703 | 16,188 |
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [ ] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```
from langchain_community.llms.vertexai import VertexAI

VertexAI(
    model_name=GEMINI_PRO_MODEL_NAME,
    credentials=credentials,
    project=project_id,
    streaming=streaming,
    temperature=0.0,
    n=1,
    max_output_tokens=1000,
)
```
### Description
I am using the latest version of LangChain, and I cannot find a way to pass safety settings to override the default safety categories for Gemini Pro.
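For illustration, this is the kind of parameter the issue is asking for — `safety_settings` is not an actual `VertexAI` field at these versions (that is the gap being reported); the enum names follow the underlying Vertex AI SDK, and `credentials`/`project_id` are assumed defined as above:

```python
from vertexai.preview.generative_models import HarmBlockThreshold, HarmCategory

safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
}

# Desired (hypothetical) API: forward the settings to the Gemini call.
llm = VertexAI(
    model_name="gemini-pro",
    credentials=credentials,
    project=project_id,
    safety_settings=safety_settings,
)
```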
### System Info
```
langchain==0.1.1
langchain-community==0.0.13
langchain-core==0.1.12
langchain-google-genai==0.0.6
```
Python 3.9.5
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async | Gemini-Pro : Safety attributes params are missing with VertexAI | https://api.github.com/repos/langchain-ai/langchain/issues/16186/comments | 2 | 2024-01-18T09:38:54Z | 2024-04-29T16:13:08Z | https://github.com/langchain-ai/langchain/issues/16186 | 2,087,891,087 | 16,186 |
[
"hwchase17",
"langchain"
] | ### Issue with current documentation:
```python
from qdrant_client import QdrantClient, models
import os
import qdrant_client
from dotenv import load_dotenv
from langchain.chains import RetrievalQA
from langchain.embeddings.huggingface import HuggingFaceInstructEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import Qdrant
from langchain_core.documents import Document

COLLECTION_NAME = "VIP666"
client = QdrantClient(host="192.168.0.3", port=6333)
client.create_collection(
    collection_name=COLLECTION_NAME,
    vectors_config=models.VectorParams(
        distance=models.Distance.COSINE,
        size=384,
    ),
    optimizers_config=models.OptimizersConfigDiff(memmap_threshold=0),
    hnsw_config=models.HnswConfigDiff(on_disk=True, m=16, ef_construct=100),
)

os.environ['OPENAI_API_KEY'] = "key"


def get_vector_store():
    client = qdrant_client.QdrantClient(
        os.getenv('QDRANT_HOST'),
    )
    embeddings = HuggingFaceInstructEmbeddings(model_name="all-MiniLM-L6-v2")
    vector_store = Qdrant(
        client=client,
        collection_name=os.getenv('QDRANT_COLLECTION_NAME'),
        embeddings=embeddings,
    )
    return vector_store


def main():
    load_dotenv()
    vectorstore = get_vector_store()
    documents = []
    qa = RetrievalQA.from_chain_type(
        llm=OpenAI(
            temperature=0.7,
            max_tokens=100,
        ),
        chain_type="stuff",
        retriever=vectorstore.as_retriever(
            search_type="similarity_score_threshold",
            search_kwargs={"score_threshold": 0.6, "k": 3},
        ),
    )
    while True:
        user_question = input("ask:")
        document = Document(page_content=user_question, metadata={'source': 'user'})
        documents.append(document)
        answer = qa.invoke(user_question)
        print(f"Ai: {answer}")
        vectorstore.add_documents(documents)
        if user_question == "bye":
            break


if __name__ == "__main__":
    main()
```
### Idea or request for content:
I have created a collection with Qdrant and set memmap_threshold=0, which means all vectors are stored locally on disk. I want to implement long-term memory functionality, but I am hindered by the embedding model's max_seq_length limit of 512. How can I use text-splitting techniques to ensure my vector data is stored completely? Alternatively, how should I configure max_seq_length? | I have established a collection using Qdrant, and set memmap_threshold=0, which means all vectors will be stored locally. I want to implement long-term memory functionality, but I am hindered by the limitation of max_seq_length being set to 512. How can I use text splitting techniques to ensure complete storage of my vector data? Alternatively, how should I configure max_seq_length? | https://api.github.com/repos/langchain-ai/langchain/issues/16183/comments | 1 | 2024-01-18T06:57:16Z | 2024-04-25T16:38:37Z | https://github.com/langchain-ai/langchain/issues/16183 | 2,087,642,831 | 16,183
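The question above boils down to chunking before embedding: SentenceTransformers models silently truncate inputs past `max_seq_length`, so the usual fix is to split documents first rather than raise the limit. A minimal sketch follows, assuming the `vectorstore` and `documents` from the code above; note that `chunk_size` counts characters, not tokens, so it is kept comfortably below the 512-token limit:

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,    # characters, not tokens — stays well under 512 tokens
    chunk_overlap=50,  # preserve context across chunk boundaries
)
chunks = splitter.split_documents(documents)
vectorstore.add_documents(chunks)
```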
[
"hwchase17",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
### Example Code
```python
from langchain_community.callbacks import get_openai_callback

with get_openai_callback() as cb:
    for chunk in llm.stream(message):
        print(chunk)
    print(cb)
```
### Description
I am trying to get the token-consumption details for a call, but it returns all zeros:

```
Tokens Used: 0
Prompt Tokens: 0
Completion Tokens: 0
Successful Requests: 0
```
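OpenAI's streaming responses did not include usage data at these versions, so the callback has nothing to record. A common workaround is counting tokens locally with `tiktoken` — a minimal sketch, assuming a chat model whose chunks expose `.content` and that `llm` and `message` are defined as above:

```python
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")  # match your model

completion = ""
for chunk in llm.stream(message):
    completion += chunk.content

# Approximate counts: ignores chat-format framing tokens.
prompt_tokens = len(enc.encode(message))
completion_tokens = len(enc.encode(completion))
print(f"Prompt: {prompt_tokens}, Completion: {completion_tokens}")
```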
### System Info
langchain latest
openai latest
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async | get_openai_callback does not returns the tokens consumed when we stream the output from LLM | https://api.github.com/repos/langchain-ai/langchain/issues/16182/comments | 1 | 2024-01-18T06:16:32Z | 2024-04-25T16:17:13Z | https://github.com/langchain-ai/langchain/issues/16182 | 2,087,589,101 | 16,182 |