issue_owner_repo (listlengths 2-2) | issue_body (stringlengths 0-261k, nullable ⌀) | issue_title (stringlengths 1-925) | issue_comments_url (stringlengths 56-81) | issue_comments_count (int64 0-2.5k) | issue_created_at (stringlengths 20-20) | issue_updated_at (stringlengths 20-20) | issue_html_url (stringlengths 37-62) | issue_github_id (int64 387k-2.46B) | issue_number (int64 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi there,
Currently, when using PGVector you can pass a filter object, but this filter only accepts key-value pairs:
`Dict[str, str]`
I am requesting the ability to also pass a list of strings per key, to easily filter across many pieces of data:
`Dict[str, list[str]]`
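A minimal sketch of the requested usage (illustrative only; accepting a list value in `filter` is the behaviour being requested, not the current API, and the connection string is a placeholder):

```
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.pgvector import PGVector

store = PGVector(
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",  # placeholder DSN
    embedding_function=OpenAIEmbeddings(),
)
# Requested: a list value matches documents whose metadata key equals any of the items.
docs = store.similarity_search(
    "query text",
    filter={"content_type": ["blog", "faq", "release_notes"]},
)
```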
### Motivation
I have a lot of data whose metadata contains a content type.
I would like to provide a list of content types to PGVector and have it filter and return content from multiple types.
### Your contribution
N/A | Pgvector support to filter by List | https://api.github.com/repos/langchain-ai/langchain/issues/9726/comments | 3 | 2023-08-24T22:50:34Z | 2023-12-11T16:05:58Z | https://github.com/langchain-ai/langchain/issues/9726 | 1,865,993,227 | 9,726 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain Version: 0.0.258
Python Version 3.8.10
Ubuntu 20.04.5 LTS
We are following the instructions from the blog posted at https://python.langchain.com/docs/use_cases/question_answering/
We find that this works on small documents/directories. However when we run it on larger data sets, we get rate limit errors as below:
```
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization org-RVhTeiJGcKtuLfYUSO6rLABk on tokens per min. Limit: 1000000 / min. Current: 899517 / min. Contact us through our help center at help.openai.com if you continue to have issues..
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization org-RVhTeiJGcKtuLfYUSO6rLABk on tokens per min. Limit: 1000000 / min. Current: 887493 / min. Contact us through our help center at help.openai.com if you continue to have issues..
```
Since we have a paid account with OpenAI, we doubt we are hitting any actual OpenAI-side limits. Looking at their dashboard, we see we are well under any limits.
Our full code that demonstrates this issue is posted below:
```
import os
import sys
import environ
from langchain.document_loaders import DirectoryLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

env = environ.Env()
environ.Env.read_env()

# Load API key and document location
OPENAI_API_KEY = env('OPENAI_API_KEY')
if OPENAI_API_KEY == "":
    print("Missing OpenAPI key")
    exit()
print("Using OpenAPI with key ["+OPENAI_API_KEY+"]")

path = sys.argv[1]
if path == "":
    print("Missing document path")
    exit()

# Document loading
loader = DirectoryLoader(path, glob="*")
data = loader.load()

# Text splitting
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)

# Create retriever
vectorstore = Chroma.from_documents(documents=all_splits, embedding=OpenAIEmbeddings())

# Connect to LLM for generation
template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "thanks for asking!" at the end of the answer.
{context}
Question: {question}
Helpful Answer:"""
QA_CHAIN_PROMPT = PromptTemplate.from_template(template)

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa_chain = RetrievalQA.from_chain_type(
    llm,
    retriever=vectorstore.as_retriever(),
    chain_type_kwargs={"prompt": QA_CHAIN_PROMPT}
)

# prompt loop
def get_prompt():
    print("Type 'exit' to quit")
    while True:
        prompt = input("Enter a prompt: ")
        if prompt.lower() == 'exit':
            print('Exiting...')
            break
        else:
            try:
                result = qa_chain({"query": prompt})
                print(result["result"])
            except Exception as e:
                print(e)

get_prompt()
```
### Who can help?
@eyurtsev @hwchase17 @agola11
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
You can run the code posted above against any document/directory that is large... try with something over 50MB
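Not part of the original report, but one way to avoid the burst that triggers the tokens-per-minute limit is to index in paced batches instead of a single `from_documents` call (a sketch reusing the objects from the script above; batch size and sleep interval are illustrative, not tuned values):

```
import time

batch = 200
# Embed in smaller batches with pauses so the per-minute token budget is not exhausted at once.
vectorstore = Chroma.from_documents(documents=all_splits[:batch], embedding=OpenAIEmbeddings())
for i in range(batch, len(all_splits), batch):
    vectorstore.add_documents(all_splits[i:i + batch])
    time.sleep(10)
```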
### Expected behavior
A correct chat reply | Langchain QA over large documents results in Rate limit errors | https://api.github.com/repos/langchain-ai/langchain/issues/9717/comments | 6 | 2023-08-24T19:54:13Z | 2023-12-12T16:33:44Z | https://github.com/langchain-ai/langchain/issues/9717 | 1,865,791,331 | 9,717 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
It looks like it's creating new dataframes to answer complex questions, but in the answer it provides it keeps referencing the variable name rather than the printed variable.
Does the current agent not have the ability to create and use new variables?
Note: I'm using the CSV agent as a tool within another agent
### Suggestion:
Will the CSV agent having the create file tool alleviate this? | Issue: pandas agent tries to create new variables but returns along the lines of "the top 10 are {top_10_df}" | https://api.github.com/repos/langchain-ai/langchain/issues/9715/comments | 8 | 2023-08-24T19:07:43Z | 2024-07-04T16:06:38Z | https://github.com/langchain-ai/langchain/issues/9715 | 1,865,729,449 | 9,715 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain Version: 0.0.271, Python 3.11
Like the title says, `_aget_relevant_documents` isn't implemented in ParentDocumentRetriever, so async calls do not work. It throws a NotImplementedError in BaseRetriever:
```
    async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
    ) -> List[Document]:
        """Asynchronously get documents relevant to a query.

        Args:
            query: String to find relevant documents for
            run_manager: The callbacks handler to use

        Returns:
            List of relevant documents
        """
        raise NotImplementedError()
```
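Until this is implemented upstream, a workaround sketch (my own, not existing API) is to subclass the retriever and run the synchronous path in a worker thread; note that callbacks from `run_manager` are not propagated here:

```
import asyncio
from typing import List

from langchain.callbacks.manager import AsyncCallbackManagerForRetrieverRun
from langchain.retrievers import ParentDocumentRetriever
from langchain.schema import Document


class AsyncParentDocumentRetriever(ParentDocumentRetriever):
    async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun
    ) -> List[Document]:
        loop = asyncio.get_running_loop()
        # Delegate to the existing sync implementation in a thread so acall() does not block.
        return await loop.run_in_executor(None, self.get_relevant_documents, query)
```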
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just do a `chain.acall` from a chain using documents (like a ConversationalRetrievalChain), and it should trigger this error.
### Expected behavior
Should be able to call the chain async when using ParentDocumentRetriever. | ParentDocumentRetriever doesn't implement BaseRetriever._aget_relevant_documents | https://api.github.com/repos/langchain-ai/langchain/issues/9707/comments | 5 | 2023-08-24T15:28:16Z | 2023-12-01T16:07:43Z | https://github.com/langchain-ai/langchain/issues/9707 | 1,865,405,504 | 9,707 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The "Creating a Custom Prompt Template" documentation is outdated with Pydantic v2.
```
class FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel):
    """A custom prompt template that takes in the function name as input, and formats the prompt template to provide the source code of the function."""

    @validator("input_variables")
    def validate_input_variables(cls, v):
        """Validate that the input variables are correct."""
        if len(v) != 1 or "function_name" not in v:
            raise ValueError("function_name must be the only input_variable.")
        return v
```
The above code raises the following TypeError:
```
> class FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel):
E TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
```
Frankly, I have no idea how to get the prompt templates to work with new Pydantic, even after changing @validator to @field_validator.
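For reference, a sketch that avoids the metaclass conflict (assuming a langchain version that ships the `langchain.pydantic_v1` compatibility shim, i.e. the same Pydantic v1 base that langchain's own classes are built on):

```
import inspect

from langchain.prompts import StringPromptTemplate
from langchain.pydantic_v1 import BaseModel, validator  # v1-compatible shim bundled with langchain


class FunctionExplainerPromptTemplate(StringPromptTemplate, BaseModel):
    """Prompt template that renders the source code of a given function."""

    @validator("input_variables")
    def validate_input_variables(cls, v):
        if len(v) != 1 or "function_name" not in v:
            raise ValueError("function_name must be the only input_variable.")
        return v

    def format(self, **kwargs) -> str:
        fn = kwargs["function_name"]
        # Minimal body for illustration: show the function's name and source code.
        return f"Function name: {fn.__name__}\nSource:\n{inspect.getsource(fn)}"


prompt = FunctionExplainerPromptTemplate(input_variables=["function_name"])
```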
### Idea or request for content:
Update the documentation to include new version of Pydantic in examples. | DOC: Custom Templates issue with Pydantic v2 | https://api.github.com/repos/langchain-ai/langchain/issues/9702/comments | 20 | 2023-08-24T14:13:41Z | 2024-02-16T16:09:02Z | https://github.com/langchain-ai/langchain/issues/9702 | 1,865,260,343 | 9,702 |
[
"hwchase17",
"langchain"
]
| ### System Info
Platform - AWS
Python version: 3.11.4
OS - Mac
### Who can help?
@3coins , @hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code sample which produces the bug:
```
body = json.dumps({"prompt": prompt, "max_tokens_to_sample": 8192})
accept = "application/json"
contentType = "application/json"
response_claudeV2 = boto3_bedrock.invoke_model(
    body=body, modelId="anthropic.claude-v2", accept=accept, contentType=contentType
)
response_body_claudeV2 = json.loads(response_claudeV2.get("body").read())
print(response_body_claudeV2.get("completion"))
```
When the above code snippet is executed with boto3_client we get the error:
` botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: 8192 is not less or equal to 8191, please reformat your input and try again.`
As per the Anthropic Claude documentation, claude-instant can output 9k tokens and claude-v1 and claude-v2 can each output 12k tokens, but whenever the 'max_tokens_to_sample' parameter exceeds 8191 the above error is raised.
### Expected behavior
Expected behaviour is that the model accepts the request, since it can produce up to 12K output tokens. | Claude models not able to output more than 8191 tokens. | https://api.github.com/repos/langchain-ai/langchain/issues/9697/comments | 6 | 2023-08-24T12:21:25Z | 2024-03-14T06:18:03Z | https://github.com/langchain-ai/langchain/issues/9697 | 1,865,041,669 | 9,697 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Initialization with Database Connection: When an instance of the PGVector class is created, it automatically establishes a connection with the PostgreSQL vector database.
Method for Closing Connection: We need to implement a method within the PGVector class that allows you to close the established connection with the PostgreSQL database.
```
def __del__(self):
    # Close the session (and thus the connection) when the instance is destroyed.
    self.session.close()
```
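For comparison, an explicit variant of the same request as seen from calling code (a sketch; `close()` does not exist on PGVector today, and the connection string is a placeholder):

```
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.pgvector import PGVector

store = PGVector(
    connection_string="postgresql+psycopg2://user:pass@localhost:5432/db",  # placeholder DSN
    embedding_function=OpenAIEmbeddings(),
)
try:
    docs = store.similarity_search("query")
finally:
    store.close()  # requested method (hypothetical): release the pooled connection deterministically
```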
### Motivation
The problem is that I am unable to close a connection, so the pool gets overloaded with multiple connections and the service starts throwing errors.
### Your contribution
I guess, may be. | No way to Close an open connection in PGVector.py | https://api.github.com/repos/langchain-ai/langchain/issues/9696/comments | 3 | 2023-08-24T11:57:09Z | 2023-11-15T20:34:38Z | https://github.com/langchain-ai/langchain/issues/9696 | 1,865,001,390 | 9,696 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
It will always output
```
responded: {content}
```
I suppose this [line](https://github.com/langchain-ai/langchain/blob/v0.0.271/libs/langchain/langchain/agents/openai_functions_agent/base.py#L130) is not correct.
### Suggestion:
Is the following code the way you need?
```
content_msg = f"responded: {message.content}\n" if message.content else "\n"
``` | Agent by AgentType.OPENAI_FUNCTIONS cannot output message content. | https://api.github.com/repos/langchain-ai/langchain/issues/9695/comments | 1 | 2023-08-24T10:03:15Z | 2023-11-30T16:06:01Z | https://github.com/langchain-ai/langchain/issues/9695 | 1,864,822,137 | 9,695 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Error message:
```
Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI: HTTPSConnectionPool(host='api.openai.com', port=443): Max retries exceeded with url: /v1/chat/completions (Caused by ProxyError('Cannot connect to proxy.', RemoteDisconnected('Remote end closed connection without response'))).
```
Code:
```
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

llm = OpenAI()
chat_model = ChatOpenAI()
chat_model.predict("hi!")
```
The API key is set in the environment beforehand:
```
export OPENAI_API_KEY="sk-xxxxxxujOoH";
```
### Suggestion:
_No response_ | Issue: APIConnectionError: Error communicating with OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/9688/comments | 2 | 2023-08-24T06:59:49Z | 2023-11-30T02:15:30Z | https://github.com/langchain-ai/langchain/issues/9688 | 1,864,533,468 | 9,688 |
[
"hwchase17",
"langchain"
]
| 
The Prompt was to explain tables in the data. in this case it should query into the SQL database. | DQL DB Langchain : When running an db_run query based on the prompt it should execute sql query only when needed. | https://api.github.com/repos/langchain-ai/langchain/issues/9686/comments | 5 | 2023-08-24T06:35:24Z | 2024-01-24T10:41:11Z | https://github.com/langchain-ai/langchain/issues/9686 | 1,864,499,596 | 9,686 |
[
"hwchase17",
"langchain"
]
| ### System Info
> Entering new AgentExecutor chain...
Invoking: `duckduckgo_search` with `2023年8月的新闻`
An error occurred: 'DDGS' object does not support the context manager protocol
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
# Initialize the OpenAI Functions agent
agent = initialize_agent(
    # tools,
    tools=load_tools(["ddg-search"]),
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
    agent_kwargs=agent_kwargs,
    memory=memory,
    max_iterations=10,
    early_stopping_method="generate",
    handle_parsing_errors=True,  # initialize the agent and handle parsing errors
    callbacks=[handler],
)
```
> Entering new AgentExecutor chain...
Invoking: `duckduckgo_search` with `2023年8月的新闻`
An error occurred: 'DDGS' object does not support the context manager protocol
### Expected behavior
This class provides search via the [DuckDuckGo](https://duckduckgo.com/) search engine.
```
from langchain.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()
search.run("Who is winner of FIFA worldcup 2018?")
```
You should expect output like the following:
The 2018 FIFA World Cup was the 21st FIFA World Cup, ... Mario Zagallo (Brazil) and Franz Beckenbauer (Germany) have also achieved the feat. | An error occurred: 'DDGS' object does not support the context manager protocol | https://api.github.com/repos/langchain-ai/langchain/issues/9685/comments | 2 | 2023-08-24T05:50:09Z | 2023-11-30T16:06:11Z | https://github.com/langchain-ai/langchain/issues/9685 | 1,864,451,518 | 9,685 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Enable the option to specify arbitrary keyword arguments (e.g. gpu_memory_utilization=0.98) in langchain.llms.VLLM() constructor
### Motivation
Currently, the vllm library provides many useful keyword arguments that enable a lot of use cases on a wide variety of devices, but langchain does not expose them.
For example, some models do not fit on GPUs with less memory at the default gpu_memory_utilization = 0.9; increasing this limit could enable the use of those models on smaller GPUs.

### Your contribution
I can contribute by submitting a PR for that. | Allow specifying arbitrary keyword arguments in `langchain.llms.VLLM` | https://api.github.com/repos/langchain-ai/langchain/issues/9682/comments | 2 | 2023-08-24T05:41:55Z | 2023-11-30T16:06:16Z | https://github.com/langchain-ai/langchain/issues/9682 | 1,864,443,617 | 9,682 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version == 0.0.271
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
chat_history = MessagesPlaceholder(variable_name="chat_history")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo')
agent_chain = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
    max_iterations=1,
    early_stopping_method='generate',
    handle_parsing_errors=True,
    system_prompt="You are an assistant named Mikey",
    agent_kwargs={
        "memory_prompts": [chat_history],
        "input_variables": ["input", "agent_scratchpad", "chat_history"],
    }
)
agent_chain.run(input="My name is Dev")
agent_chain.run(input="What is my name")
```
### Expected behavior
Expected behavior:
Final answer: Your name is Dev
Actual output:
Final answer: I don't have access to personal information like your name. Is there anything else I can help you with?
Note: What happens is that the memory is not being passed along, so the model doesn't know my name | initialize_agent not saving and returning messages in memory | https://api.github.com/repos/langchain-ai/langchain/issues/9681/comments | 1 | 2023-08-24T05:30:13Z | 2023-08-24T05:40:45Z | https://github.com/langchain-ai/langchain/issues/9681 | 1,864,433,344 | 9,681 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The basic documentation for the output parser is out of date with Pydantic. Tried with Python 3.9 and 3.11.
```
pip install langchain
pip install python-dotenv
```
The copied code:
```
from langchain.prompts import PromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field, validator
from typing import List

load_dotenv()

model_name = 'text-davinci-003'
temperature = 0.0
model = OpenAI(model_name=model_name, temperature=temperature)

# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # You can add custom validation logic easily with Pydantic.
    @validator('setup')
    def question_ends_with_question_mark(cls, field):
        if field[-1] != '?':
            raise ValueError("Badly formed question!")
        return field

# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

# And a query intended to prompt a language model to populate the data structure.
joke_query = "Tell me a joke."
_input = prompt.format_prompt(query=joke_query)
output = model(_input.to_string())
parser.parse(output)
```
results in:
```
<input>:21: PydanticDeprecatedSince20: Pydantic V1 style `@validator` validators are deprecated. You should migrate to Pydantic V2 style `@field_validator` validators, see the migration guide for more details. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.3/migration/
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 16, in <module>
File "/Users/travisbarton/opt/anaconda3/envs/scratch39/lib/python3.9/site-packages/pydantic/_internal/_model_construction.py", line 130, in __new__
cls.__pydantic_decorators__ = DecoratorInfos.build(cls)
File "/Users/travisbarton/opt/anaconda3/envs/scratch39/lib/python3.9/site-packages/pydantic/_internal/_decorators.py", line 441, in build
res.validators[var_name] = Decorator.build(
File "/Users/travisbarton/opt/anaconda3/envs/scratch39/lib/python3.9/site-packages/pydantic/_internal/_decorators.py", line 249, in build
func = shim(func)
File "/Users/travisbarton/opt/anaconda3/envs/scratch39/lib/python3.9/site-packages/pydantic/_internal/_decorators_v1.py", line 77, in make_generic_v1_field_validator
raise PydanticUserError(
pydantic.errors.PydanticUserError: The `field` and `config` parameters are not available in Pydantic V2, please use the `info` parameter instead.
For further information visit https://errors.pydantic.dev/2.3/u/validator-field-config-info
```
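For what it's worth, a sketch that keeps the documented example working under Pydantic 2.x (it relies on the `pydantic.v1` compatibility namespace, which matches the Pydantic v1 API that langchain's parsers still expect; treat this as a workaround, not the documented fix):

```
from pydantic.v1 import BaseModel, Field, validator


class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # A v1-style validator still works when imported from pydantic.v1
    @validator("setup")
    def question_ends_with_question_mark(cls, field):
        if field[-1] != "?":
            raise ValueError("Badly formed question!")
        return field
```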
### Idea or request for content:
Maybe fix the validator to match pydantic? Perhaps I'm mistaken? | DOC: <Please write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/9680/comments | 2 | 2023-08-24T05:17:48Z | 2023-11-30T16:06:21Z | https://github.com/langchain-ai/langchain/issues/9680 | 1,864,422,532 | 9,680 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version == 0.0.271
### Who can help?
@hw
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
chat_history = MessagesPlaceholder(variable_name="chat_history")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(temperature=0, model_name='gpt-3.5-turbo')
agent_chain = initialize_agent(
    tools=tools,
    llm=llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
    max_iterations=1,
    early_stopping_method='generate',
    handle_parsing_errors=True,
    system_prompt="You are an assistant named Mikey",
    agent_kwargs={
        "memory_prompts": [chat_history],
        "input_variables": ["input", "agent_scratchpad", "chat_history"],
    }
)
agent_chain.run(input="What is my name")
```
### Expected behavior
final answer: Your name is ....
What happens is that the memory is not being returned(the name was given to the model in a previous run) so the model doesn't know my name | Initialize_agent not storing messages when memory is present | https://api.github.com/repos/langchain-ai/langchain/issues/9679/comments | 1 | 2023-08-24T05:04:25Z | 2023-08-24T05:25:36Z | https://github.com/langchain-ai/langchain/issues/9679 | 1,864,411,414 | 9,679 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I tried to use ERNIE Bot as an agent, as shown below.
```python
tools = [faq_tool, query_order_tool]
FORMAT_INSTRUCTIONS = """要使用工具,请按照以下格式返回:
Thought: 我是否需要使用一个工具? 是
Action: 采取的action应该是以下之一:[{tool_names}]
Action Input: action的输入参数
Observation: action的返回结果
如果你不需要使用工具, 请按以下格式返回:
Thought: 我是否需要使用工具? 否
{ai_prefix}: [把你的回复放在这里]
"""
PREFIX = """用中文回答以下问题. 如果找不到答案回答 '我不知道'. 你可以使用以下工具:"""
prompt_ = ConversationalAgent.create_prompt(
    tools,
    prefix=PREFIX,
    format_instructions=FORMAT_INSTRUCTIONS,
    input_variables=["input", "chat_history", "agent_scratchpad"]
)
memory = ConversationBufferWindowMemory(memory_key="chat_history", return_messages=True, k=3)
chatllmChain = LLMChain(llm=ErnieBotChat(), callbacks=[self.logHandler], verbose = True)
agent = ConversationalAgent(llm_chain=chatllmChain)
callback_manager = CallbackManager([self.logHandler])
self.conversation_agent = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools,memory=memory, callback_manager = callback_manager, verbose=True, max_iterations=3)
```
Then I asked a question: "你好,我应该怎么申请赔偿" ("Hello, how should I apply for compensation?")
The answer returned by the agent is:
Could not parse LLM output: `我不知道,但我会将你的问题传递给对应的工具来获取答案。我发送了一个消息到[FAQ](%E8%BF%94%E5%9B%9E)和[search_order](%E8%BF%94%E5%9B%9E)。请等待他们的回复。`
(Roughly: "I don't know, but I will pass your question on to the corresponding tool to get an answer. I sent a message to [FAQ] and [search_order]. Please wait for their reply.")
This is not correct. The correct behavior is to call the FAQ tool and get relevant information to form a reply, which was proven to work when using the OpenAI chat model.
@axiangcoding Any idea how to make ERNIE Bot able to handle this situation? Thanks in advance!!!
### Suggestion:
_No response_ | Issue: ERNIE Bot is not able to call tool | https://api.github.com/repos/langchain-ai/langchain/issues/9678/comments | 5 | 2023-08-24T03:51:14Z | 2023-11-30T16:06:26Z | https://github.com/langchain-ai/langchain/issues/9678 | 1,864,353,926 | 9,678 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | Please share a complete document PDF OR markdown etc about langchain | https://api.github.com/repos/langchain-ai/langchain/issues/9677/comments | 2 | 2023-08-24T03:07:12Z | 2023-11-30T16:06:31Z | https://github.com/langchain-ai/langchain/issues/9677 | 1,864,308,955 | 9,677 |
[
"hwchase17",
"langchain"
]
| ### Feature request
We are currently using OpenAI's ChatCompletion API with a custom ChatPromptTemplate, as converting langchain's ChatPromptTemplate to a dict (or vice versa) does not seem to work well.
So I'd like to suggest a new feature: enable loading [ChatPromptTemplate](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/prompts/chat.py) from a config dict,
by adding a loading function (or a classmethod on ChatPromptTemplate) to [prompts/loading.py](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/prompts/loading.py#L151-L155).
Or, if there is already a standardized way to easily convert a dict to a ChatPromptTemplate, please let me know.
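For context, the closest thing I've found (a partial workaround rather than a real load-from-config path, and it assumes a langchain version where `ChatPromptTemplate.from_messages` accepts (role, template) tuples) is to map the dict into message tuples by hand:

```
from langchain.prompts import ChatPromptTemplate

config = {
    "messages": [
        {"role": "system", "template": "You are a helpful assistant."},
        {"role": "human", "template": "{question}"},
    ]
}
prompt = ChatPromptTemplate.from_messages(
    [(m["role"], m["template"]) for m in config["messages"]]
)
```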
### Motivation
We are currently using OpenAI's ChatCompletion API with a custom ChatPromptTemplate, as converting langchain's ChatPromptTemplate to a dict (or vice versa) does not seem to work well.
As using OpenAI's ChatCompletion API (rather than the Completion API) is becoming mainstream, I thought it might be good to support a standard loading method for ChatPromptTemplate.
### Your contribution
If you allow me, I'd like to make a contribution related to this feature. | Support load ChatPromptTemplate from config dict | https://api.github.com/repos/langchain-ai/langchain/issues/9676/comments | 2 | 2023-08-24T03:04:38Z | 2023-11-30T16:06:36Z | https://github.com/langchain-ai/langchain/issues/9676 | 1,864,307,161 | 9,676 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
langchain==0.0.250
platform = macOS
Python version 3.11
I am not sure if this is by design; I think it is not, hence I am reporting it as an issue.
When looping through a list of PDF files to get a summary for each, I am creating an index using VectorstoreIndexCreator().from_documents(pages).
The issue is that for each subsequent file, data (documents) from previous files is also being retrieved, passed on to GPT, and ends up in the summary.
Code:
```
from langchain.document_loaders import PyPDFLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI

for file in files:
    loader = PyPDFLoader(file['path'])
    pages = loader.load_and_split()
    index = VectorstoreIndexCreator().from_documents(pages)
    retriever = index.vectorstore.as_retriever(search_type='mmr')
    retriever.search_kwargs['k'] = 10
    llm = ChatOpenAI()
    aca_qa = RetrievalQA.from_chain_type(
        llm=llm,
        retriever=retriever,
        chain_type='stuff',
        # return_source_documents=True,
    )
    result = aca_qa({'query': summary_query})
```
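If the carry-over comes from reusing the same default Chroma collection across iterations, one way to isolate each file (a sketch; `vectorstore_kwargs` is forwarded by VectorstoreIndexCreator to the underlying vector store, and the collection name here is illustrative) is:

```
index = VectorstoreIndexCreator(
    vectorstore_kwargs={"collection_name": f"file_{i}"}  # i: a hypothetical per-file counter
).from_documents(pages)
```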
### Suggestion:
Temp fix:
The resolution is to include - index = '' at the beginning of each loop cycle | Issue: vectorstore hence indexes and embeddings are persisting when they should not be. | https://api.github.com/repos/langchain-ai/langchain/issues/9668/comments | 3 | 2023-08-23T23:32:29Z | 2023-11-30T16:06:41Z | https://github.com/langchain-ai/langchain/issues/9668 | 1,864,147,213 | 9,668 |
[
"hwchase17",
"langchain"
]
| ### System Info
- Version: 0.0.271
- Platform: Macbook Pro M1 macos 13.5
- Python Version: 3.11.4
### Who can help?
@...
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behavior:
1. Copy the existing code from [Langchain Document - Json Agent](https://python.langchain.com/docs/integrations/toolkits/json)
2. Replace the llm model from OpenAI to GPT4All
3. Use the model [ggml-gpt4all-j-v1.3-groovy.bin](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin)
4. Download the [JSON FIle](https://github.com/OAI/OpenAPI-Specification/blob/main/examples/v2.0/json/petstore.json) and provide the path in the script
5. Replace the question with `What are the required parameters in the request body to the /pets endpoint?`.
6. Output:
```
File "venv/lib/python3.11/site-packages/langchain/agents/mrkl/output_parser.py", line 61, in parse
    raise OutputParserException(
langchain.schema.output_parser.OutputParserException: Could not parse LLM output: 'Action: json_spec_list_keys(data)'
```
### Expected behavior
The agent should parse through and provide the answer. | Could not parse LLM output: `Action: json_spec_list_keys(data)` | https://api.github.com/repos/langchain-ai/langchain/issues/9658/comments | 2 | 2023-08-23T15:39:38Z | 2023-11-29T16:06:24Z | https://github.com/langchain-ai/langchain/issues/9658 | 1,863,582,262 | 9,658 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Modules -> prompt templates
The following code from the documentation throws an error:
```
from langchain import PromptTemplate

invalid_prompt = PromptTemplate(
    input_variables=["adjective"],
    template="Tell me a {adjective} joke about {content}."
)
```
This is because 'content' is missing from the input variables, even though it is also one of the input variables used by the prompt template.
error -

**Updated code:**
```
from langchain import PromptTemplate

invalid_prompt = PromptTemplate(
    input_variables=["adjective", "content"],
    template="Tell me a {adjective} joke about {content}."
)
```
| Error in the code given in the prompt template | https://api.github.com/repos/langchain-ai/langchain/issues/9656/comments | 3 | 2023-08-23T15:02:55Z | 2023-12-04T16:05:28Z | https://github.com/langchain-ai/langchain/issues/9656 | 1,863,519,793 | 9,656 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.235 / python 3.10.11 /
File: libs/langchain/langchain/output_parsers/json.py
There is a bug in the function `parse_json_markdown`: when the input json_string contains \`\`\` $code \`\`\`, it mistakenly interprets $code as JSON and fails to parse.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. make LLM answer with text and code (in \`\`\` $code \`\`\` format)
2. The output_parser will parse the $code instead of \`\`\` json \`\`\`
3. FAIL
### Expected behavior
maybe use re.findall and get the last search result
EX:
```
matches = re.findall(r"```(json)?(.*?)```", json_string, re.DOTALL)
json_str = matches[-1][1] if matches else json_string  # take the last fenced block
``` | output_parser has a bug while the output string has ``` code ``` | https://api.github.com/repos/langchain-ai/langchain/issues/9654/comments | 1 | 2023-08-23T11:58:25Z | 2023-11-29T16:06:34Z | https://github.com/langchain-ai/langchain/issues/9654 | 1,863,187,495 | 9,654 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Link not working: [OpenAPI](https://python.langchain.com/docs/use_cases/apis/openapi_openai)
where is the link? https://python.langchain.com/docs/modules/chains/how_to/openai_functions
### Idea or request for content:
_No response_ | DOC: Not working: <OpenAPI spec and create + execute valid requests against the API> | https://api.github.com/repos/langchain-ai/langchain/issues/9653/comments | 2 | 2023-08-23T11:28:20Z | 2023-11-29T16:06:39Z | https://github.com/langchain-ai/langchain/issues/9653 | 1,863,141,243 | 9,653 |
[
"hwchase17",
"langchain"
]
| I wrote a ChatGLM class that inherits LLM class as below.
```
class ChatGLM(LLM):
    def _call(self, prompt: str,
              stop: Optional[List[str]] = None) -> str:
        message = [{"role": "user", "content": prompt}]
        payload = {"model": "string", ...}
        headers = {"Content-Type": "application/json"}
        response = requests.post(url, json=payload, headers=headers)
        return response.json()['choices'][0]['message']['content']
```
And I hope to use the ChatGLM class I wrote to replace the previously used OpenAI class during sql querying with SQLDatabaseChain:
```
llm = ChatGLM(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, use_query_checker=True,
return_intermediate_steps=False)
db_chain.run("What tables are in the database?")
```
This is the error reported by the code, does anyone know what caused it?
```
> Entering new SQLDatabaseChain chain...
What tables are in the database?
SQLQuery: Here
I'm sorry, but I don't see any SQLite query provided in your message. Could you please provide the SQLite query so I can review it for any common mistakes?
---------------------------------------------------------------------------
OperationalError Traceback (most recent call last)
File C:\Language\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py:1963, in Connection._exec_single_context(self, dialect, context, statement, parameters)
1962 if not evt_handled:
-> 1963 self.dialect.do_execute(
1964 cursor, str_statement, effective_parameters, context
1965 )
1967 if self._has_events or self.engine._has_events:
File C:\Language\Anaconda3\lib\site-packages\sqlalchemy\engine\default.py:920, in DefaultDialect.do_execute(self, cursor, statement, parameters, context)
919 def do_execute(self, cursor, statement, parameters, context=None):
--> 920 cursor.execute(statement, parameters)
OperationalError: near "I": syntax error
......
```
I have traced the error to `langchain/chains/sql_database/base.py` line 125; from there, ChatGLM does not produce a correct `sql_cmd`, but I cannot investigate further what is causing this. Any help would be greatly appreciated, thank you.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import requests
from typing import Any, List, Mapping, Optional

import langchain
from langchain.llms.base import LLM
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

db = SQLDatabase.from_uri('sqlite:///music.db')


class ChatGLM(LLM):
    url = "http://region-9.seetacloud.com:23361/v1/chat/completions"

    @property
    def _llm_type(self) -> str:
        return "chatglm2-6b"

    def _call(self, prompt: str,
              stop: Optional[List[str]] = None) -> str:
        message = [{
            "role": "user",
            "content": prompt
        }]
        url = "http://region-9.seetacloud.com:23361/v1/chat/completions"
        payload = {
            "model": "string",
            "messages": message,
            "temperature": 1,
            "top_p": 0,
            "n": 1,
            "max_tokens": 0,
            "stream": False
        }
        headers = {
            "Content-Type": "application/json"
        }
        response = requests.post(url, json=payload, headers=headers)
        return response.json()['choices'][0]['message']['content']

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        _param_dict = {
            "url": self.url
        }
        return _param_dict


llm = ChatGLM(temperature=0, verbose=True)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, use_query_checker=True,
                                     return_intermediate_steps=False)
db_chain.run("What tables are in the database?")
```
### Expected behavior
return correct sql query | sql_cmd Error: SQLDatabaseChain with ChatGLM2-6B | https://api.github.com/repos/langchain-ai/langchain/issues/9651/comments | 2 | 2023-08-23T10:31:14Z | 2023-11-30T16:06:51Z | https://github.com/langchain-ai/langchain/issues/9651 | 1,863,049,747 | 9,651 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.268
python 3.9
Windows 10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
class Actor(BaseModel):
    name: str = Field(description="name of an actor")
    film_names: List[str] = Field(description="list of names of films they starred in")

parser = PydanticOutputParser(pydantic_object=Actor)
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)
chain = LLMChain(llm=ChatModel.get_model(), prompt=prompt, verbose=True)
output = chain.run(query="Generate the filmography for a random actor.", output_parser=parser)
```
Problem: _output_ is a JSON string, and not an Actor object.
### Expected behavior
The method _parse()_ from the OutputParser passed to the chain should be automatically called and return the parsed object, which is not the case. One has to call explicitly `parser.parse(output)` to retrieve an Actor object.
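For comparison, a pattern that does run the parser (a sketch; it attaches the parser to the chain itself and calls `predict_and_parse`, rather than passing `output_parser` to `run`, which does not wire it into the chain):

```
chain = LLMChain(llm=ChatModel.get_model(), prompt=prompt, output_parser=parser, verbose=True)
actor = chain.predict_and_parse(query="Generate the filmography for a random actor.")
```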
For custom parsers, it seems to work, but not for PydanticOutputParser. I expect the chain's behavior to be consistent across all parsers. | PydanticOutputParser not called by chain | https://api.github.com/repos/langchain-ai/langchain/issues/9650/comments | 2 | 2023-08-23T10:00:55Z | 2023-08-23T11:45:35Z | https://github.com/langchain-ai/langchain/issues/9650 | 1,862,997,610 | 9,650 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using ConversationalRetrievalChain for the RAG question-answer bot.
There is one LLM call that I have not configured and it is reducing the quality of responses and increasing the time.
The prompt in the LLM call is:
> Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
It is taking around 1 second to complete this call, and it is reducing the quality of the response as well.
How do I stop this call?
### Suggestion:
_No response_ | Issue: How to stop extra LLM call in ConversationalRetrievalChain for question rephrasing | https://api.github.com/repos/langchain-ai/langchain/issues/9649/comments | 17 | 2023-08-23T09:53:18Z | 2024-06-24T20:10:42Z | https://github.com/langchain-ai/langchain/issues/9649 | 1,862,983,742 | 9,649 |
[
"hwchase17",
"langchain"
]
| ### System Info
VLLM from langchain gives the below error and stops executing:
Code:
```
from langchain.llms import VLLM

llm = VLLM(model="facebook/opt-125m",
           tensor_parallel_size=2,
           trust_remote_code=True,  # mandatory for hf models
           max_new_tokens=128,
           top_k=10,
           top_p=0.95,
           temperature=0.8,
)
print(llm("What is the capital of France ?"))
```
gives the below error when setting tensor_parallel_size=2 and runs successfully if we comment out tensor_parallel_size argument:
error =>
**2023-08-23 08:52:55,683 ERROR services.py:1207 -- Failed to start the dashboard, return code -11
2023-08-23 08:52:55,685 ERROR services.py:1232 -- Error should be written to 'dashboard.log' or 'dashboard.err'. We are printing the last 20 lines for you. See 'https://docs.ray.io/en/master/ray-observability/ray-logging.html#logging-directory-structure' to find where the log file is.
2023-08-23 08:52:55,687 ERROR services.py:1276 --
The last 20 lines of /tmp/ray/session_2023-08-23_08-52-52_882632_28/logs/dashboard.log (it contains the error message from the dashboard):
2023-08-23 08:52:55,607 INFO head.py:242 -- Starting dashboard metrics server on port 44227**
**2023-08-23 08:52:56,847 INFO worker.py:1636 -- Started a local Ray instance.**

VM Details:
CPU : 4 Core
RAM : 13 GB
GPU: Nvidia T4 *2
Instance : Kaggle Kernal
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### code:
```
from langchain.llms import VLLM

llm = VLLM(model="facebook/opt-125m",
           tensor_parallel_size=2,
           trust_remote_code=True,  # mandatory for hf models
           max_new_tokens=128,
           top_k=10,
           top_p=0.95,
           temperature=0.8,
)
print(llm("What is the capital of France ?"))
```
### On Kaggle Kernal
VM Details:
CPU : 4 Core
RAM : 13 GB
GPU: Nvidia T4 *2
Instance : Kaggle Kernal
### Expected behavior
2023-08-23 08:52:55,683 ERROR services.py:1207 -- Failed to start the dashboard , return code -11
2023-08-23 08:52:55,685 ERROR services.py:1232 -- Error should be written to 'dashboard.log' or 'dashboard.err'. We are printing the last 20 lines for you. See 'https://docs.ray.io/en/master/ray-observability/ray-logging.html#logging-directory-structure' to find where the log file is.
2023-08-23 08:52:55,687 ERROR services.py:1276 --
The last 20 lines of /tmp/ray/session_2023-08-23_08-52-52_882632_28/logs/dashboard.log (it contains the error message from the dashboard):
2023-08-23 08:52:55,607 INFO head.py:242 -- Starting dashboard metrics server on port 44227
2023-08-23 08:52:56,847 INFO worker.py:1636 -- Started a local Ray instance.
and Cell stops running i.e. execution stops | VLLM from langchain.llms | https://api.github.com/repos/langchain-ai/langchain/issues/9646/comments | 3 | 2023-08-23T09:05:03Z | 2023-11-29T16:06:49Z | https://github.com/langchain-ai/langchain/issues/9646 | 1,862,893,898 | 9,646 |
[
"hwchase17",
"langchain"
]
| ### System Info
win10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Load a jpg using UnstructuredImageLoader from Langchain.
2. An error occurred.
### Expected behavior
got the error here:
```
loader:<langchain.document_loaders.image.UnstructuredImageLoader object at 0x000002926EA8EFB0>
Exception in thread Thread-3 (_handle_results):
Traceback (most recent call last):
File "D:\ProgramData\anaconda3\envs\3.10.11\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "D:\ProgramData\anaconda3\envs\3.10.11\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "D:\ProgramData\anaconda3\envs\3.10.11\lib\multiprocessing\pool.py", line 579, in _handle_results
task = get()
File "D:\ProgramData\anaconda3\envs\3.10.11\lib\multiprocessing\connection.py", line 251, in recv
return _ForkingPickler.loads(buf.getbuffer())
TypeError: TesseractNotFoundError.__init__() takes 1 positional argument but 2 were given
``` | Load a jpg using UnstructuredImageLoader from Langchain. | https://api.github.com/repos/langchain-ai/langchain/issues/9644/comments | 2 | 2023-08-23T08:39:20Z | 2023-11-29T16:06:54Z | https://github.com/langchain-ai/langchain/issues/9644 | 1,862,851,384 | 9,644 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```
def get_connection():
    user = os.environ.get('DB_USERNAME')
    database = os.environ.get('DB_DATABASE')
    password = os.environ.get('DB_PASSWORD')
    host = os.environ.get('DB_HOST')
    port = os.environ.get('DB_PORT')
    return create_engine(
        url="mysql+pymysql://{0}:{1}@{2}:{3}/{4}".format(
            user, password, host, port, database
        )
    )

def get_whole_conversation(question):
    try:
        llm = ChatOpenAI(temperature=0, openai_api_key=env('OPENAI_API_KEY'), model='gpt-3.5-turbo')
        engine = get_connection()
        input_db = SQLDatabase(engine)
        db_chain = SQLDatabaseChain.from_llm(llm, input_db, verbose=True)
        prompt = """
        """
        tools = [Tool(name="Foo-Bar-db", func=db_chain.run, description=prompt)]
        agent_kwargs = {"extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")]}
        memory = ConversationBufferWindowMemory(memory_key="memory", k=4, return_messages=True)
        agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True,
                                 agent_kwargs=agent_kwargs, memory=memory)
        agent.run(question)
```
### Suggestion:
_No response_ | Issue: I am working with SQLChain and initialize_agent it is not answering from the connected database. | https://api.github.com/repos/langchain-ai/langchain/issues/9641/comments | 4 | 2023-08-23T06:54:38Z | 2023-11-29T16:06:59Z | https://github.com/langchain-ai/langchain/issues/9641 | 1,862,687,289 | 9,641 |
[
"hwchase17",
"langchain"
]
| ### System Info
Win10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
embeddings = HuggingFaceEmbeddings( model_name='sentence-transformers/LaBSE')
vectordb = Milvus(embedding_function=embeddings, connection_args=milvus_settings)
vectordb.add_documents(documents=spilt_results)
```
I got this error:
```
Traceback (most recent call last):
  File "E:\文件\programs\test_jieba\ingest.py", line 44, in <module>
    main()
  File "E:\文件\programs\test_jieba\ingest.py", line 39, in main
    vectordb.add_documents(documents=spilt_results)
  File "E:\文件\programs\test_jieba\.venv\lib\site-packages\langchain\vectorstores\base.py", line 92, in add_documents
    return self.add_texts(texts, metadatas, **kwargs)
  File "E:\文件\programs\test_jieba\.venv\lib\site-packages\langchain\vectorstores\milvus.py", line 454, in add_texts
    insert_list = [insert_dict[x][i:end] for x in self.fields]
  File "E:\文件\programs\test_jieba\.venv\lib\site-packages\langchain\vectorstores\milvus.py", line 454, in <listcomp>
    insert_list = [insert_dict[x][i:end] for x in self.fields]
KeyError: 'file_path'
```
### Expected behavior
Why is this happening? | When using Langchain to upload a .docx file to the Milvus database, an error occurs. | https://api.github.com/repos/langchain-ai/langchain/issues/9640/comments | 2 | 2023-08-23T06:25:55Z | 2023-11-29T16:07:05Z | https://github.com/langchain-ai/langchain/issues/9640 | 1,862,851,384 | 9,640 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
There is a notebook link in [QA using a Retriever](https://python.langchain.com/docs/use_cases/question_answering/how_to/vector_db_qa)
that should open a [notebook](https://python.langchain.com/docs/modules/chains/additional/question_answering.html).
But the link is broken; the page does not exist.
Can you fix it?
Thanks
### Idea or request for content:
_No response_ | DOC: Broken notebook link | https://api.github.com/repos/langchain-ai/langchain/issues/9639/comments | 2 | 2023-08-23T06:17:03Z | 2023-11-29T16:07:11Z | https://github.com/langchain-ai/langchain/issues/9639 | 1,862,640,640 | 9,639 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.271
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This raises a KeyError because `word` is treated like an input variable even though it is a partial variable:
```python
p = PipelinePromptTemplate(
    final_prompt=PromptTemplate.from_template("this {word} work"),
    pipeline_prompts=[],
    input_variables=[],
    partial_variables={"word": "does"}
)
print(p.format())
# or
print(p.partial().format())
```
### Expected behavior
partial_variables should be interpolated properly | PipelinePromptTemplate does not respect partial_variables | https://api.github.com/repos/langchain-ai/langchain/issues/9636/comments | 2 | 2023-08-23T02:31:08Z | 2023-11-29T16:07:15Z | https://github.com/langchain-ai/langchain/issues/9636 | 1,862,450,431 | 9,636 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.266
Python3.11.4
Sample python snippet:
```
with open("openapi_request_body.yaml") as f:
    raw_api_spec = yaml.load(f, Loader=yaml.Loader)
api_spec = reduce_openapi_spec(raw_api_spec)
requests_wrapper = RequestsWrapper()

from langchain.llms.openai import OpenAI
from langchain.agents.agent_toolkits.openapi import planner

llm = OpenAI(temperature=0.0)
agent = planner.create_openapi_agent(api_spec, requests_wrapper, llm)

while True:
    user_inp_query = input("Ask me anything: ")
    print('user_inp_query: ', user_inp_query)
    user_query = (user_inp_query)
    agent.run(user_query)
```
The sample API spec contains 3 APIs: count_cars, locate_license_plate, and timestamps_of_matching_car_configs. However, I notice no matter what question I ask I see the following API planner response:
```
Ask me anything: Tell me the timestamps whenever a green car came
-----------------------------------------------
user_inp_query: Tell me the timestamps whenever a green car came
> Entering new AgentExecutor chain...
Action: api_planner
Action Input: I need to find the right API calls to get the timestamps of when a green car came
Observation: 1. POST /count_cars with a query param to search for green cars
2. POST /timestamps_of_matching_car_configs with the query param from the previous call
3. POST /locate_license_plate with the query param from the previous call
Thought: I'm ready to execute the API calls.
Action: api_controller
Action Input: 1. POST /count_cars with a query param to search for green cars
2. POST /timestamps_of_matching_car_configs with the query param from the previous call
3. POST /locate_license_plate with the query param from the previous call
> Entering new AgentExecutor chain...
I need to make a POST request to the /count_cars endpoint with the query params.
```
The OpenAPI spec definition is as follows:
```
openapi: 3.0.0
servers:
- url: http://127.0.0.1:5001
info:
title: Car Traffic MetaData Analyzer
description: REST API service which can be used to fetch details like car's color, car's type [like suv, sedan. coupe etc i.e. type of vehicle body] and car's make [toyota, honda, porche etc i.e. manufacturing company's name]. We also store the license plate of the individual cars for each entry.
version: 1.0.0
paths:
/count_cars:
post:
summary: This API takes as an input car's color (like red, green etc), car's vehicle body type (like suv, sedan. coupe etc) and car's make (like toyota, honda, porche etc). All the input values are optional. This API then returns a count of total number of cars which are of that certain color/type/make.
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/car_attributes'
# examples:
# examples1:
# $ref: '#/components/examples/car_attributes_example1'
# examples2:
# $ref: '#/components/examples/car_attributes_example2'
required: True
responses:
'200':
description: Successful response
content:
application/json:
examples:
example1:
summary: example 1
value:
total_count: "15"
example2:
summary: example 2
value:
total_count: "0"
'400':
description: Bad Request
'404':
description: Resource Not Found
/timestamps_of_matching_car_configs:
post:
summary: This API takes as an input car's color (like red, green etc), car's type (like suv, sedan. coupe etc) and car's make (like toyota, honda, porche etc). All the input values are optional. This API then returns two values, found and timestamps. When found=true it means that car config in query has been found and the corresponding timestamps key stores a list of all the timestamp at which this car config was found. If found=False, it means that not such car config can be found.
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/car_attributes'
# examples:
# examples1:
# $ref: '#/components/examples/car_attributes_example1'
# examples2:
# $ref: '#/components/examples/car_attributes_example2'
required: True
responses:
'200':
description: Successful response when the API found none or some entries where the car's color, type and make matches with the parameters passed to it.
content:
application/json:
examples:
example1:
summary: When API endpoint found none relevant entries
value:
found: "False"
example2:
summary: When API endpoint found some relevant entries
value:
found: "True"
timestamps: ["2021-10-04T16:53:11Z", "2021-11-22T06:50:14Z"]
example3:
summary: Another example of when API endpoint found some relevant entries
value:
found: "True"
timestamps: ["2001-10-04T12:23:43Z", "2011-01-29T23:23:29Z", "2001-11-30T00:01:01Z", "2011-01-09T23:59:00Z"]
'400':
description: Bad Request
'404':
description: Resource Not Found
/locate_license_plate:
post:
summary: This API takes as an input license plate of a car. If the car's license plate is present in the database, it return with found=true, and it also returns that car's color, vehicle body type and make i.e. manufacturing company's name. Else if found=false, it means no such car is found in the database.
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/license_plate'
# examples:
# examples1:
# $ref: '#/components/examples/license_plate_example1'
# examples2:
# $ref: '#/components/examples/license_plate_example2'
required: True
responses:
'200':
description: Successful response from the API. From the response found=False means that the license plate was not found in the databse. If found=True it means that the license plate was found.
content:
application/json:
examples:
example1:
summary: When API endpoint can't find license plate in the databse
value:
found: "False"
example2:
summary: When API endpoint found license plate, it returns car's color, type and make.
value:
found: "True"
color: "red"
type: "suv"
make: "honda"
example3:
summary: When API endpoint found license plate, it returns car's color, type and make.
value:
found: "True"
color: "grey"
type: "sedan"
make: "hyundai"
'400':
description: Bad Request
'404':
description: Resource Not Found
components:
schemas:
car_attributes:
properties:
color:
description: color of the car you want to query for.
type: string
type:
description: style of the vehicle body type on whether the car is one of suv, sedan, convertible, truck etc.
type: string
make:
description: manufacturing company of the car
type: string
license_plate:
properties:
license_plate:
description: lincense plate of the car
type: string
parameters:
color:
name: color
in: query
description: color of the car you want to query for.
schema:
type: string
examples:
red:
value: red
green:
value: green
blue:
value: blue
yellow:
value: yellow
type:
name: type
in: query
description: style of the vehicle body type on whether the car is one of suv, sedan, convertible, truck etc.
schema:
type: string
examples:
suv:
value: suv
sedan:
value: sedan
convertible:
value: convertible
truck:
value: truck
make:
name: make
in: query
description: Manufacturing company of the car
schema:
type: string
examples:
subaru:
value: subaru
hyundai:
value: hyundai
toyota:
value: toyota
porche:
value: porche
license_plate:
name: license_plate
in: query
      description: License plate of the car
schema:
type: string
examples:
6FFR593:
value: 6FFR593
KVT6282:
value: KVT6282
BHT9205:
          value: BHT9205
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
Run the code snippet as-is and point openapi.yaml to the YAML file provided.
### Expected behavior
The API chain is expected to classify the query into one of these APIs and only run that API instead of calling all the APIs in the same sequence everytime. | create_openapi_agent always generating same chain no matter what question | https://api.github.com/repos/langchain-ai/langchain/issues/9634/comments | 2 | 2023-08-23T00:54:53Z | 2023-08-23T05:08:57Z | https://github.com/langchain-ai/langchain/issues/9634 | 1,862,387,605 | 9,634 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add the option to output a progress bar for methods that add documents to an index, e.g., `from_documents`.
### Motivation
Methods that add documents to an index can take a long time to execute, e.g., `from_documents`. When executing from a notebook or the CLI, it would be very convenient to be able to track progress using something like tqdm.
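For illustration, here is roughly the workaround that is needed today (a sketch only; `docs`, the batch size, and the Chroma/OpenAI choices are assumptions, not part of this request):

```python
# Sketch: add documents in batches so progress can be tracked with tqdm.
from tqdm import tqdm
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

db = Chroma(embedding_function=OpenAIEmbeddings())
batch_size = 64
for start in tqdm(range(0, len(docs), batch_size), desc="Indexing"):
    db.add_documents(docs[start : start + batch_size])
```

A built-in progress option on `from_documents`/`add_documents` would remove the need for this manual batching.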
### Your contribution
No | Monitoring the progress of long running vectorDB index updates | https://api.github.com/repos/langchain-ai/langchain/issues/9630/comments | 2 | 2023-08-22T22:08:51Z | 2024-02-25T16:06:52Z | https://github.com/langchain-ai/langchain/issues/9630 | 1,862,271,039 | 9,630 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.246
Python 3.11.4
SQLAlchemy 1.4.39
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import ChatOpenAI
from langchain.callbacks import get_openai_callback
from dotenv import load_dotenv, find_dotenv
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String, ForeignKey
from langchain.agents import initialize_agent
from langchain.sql_database import SQLDatabase
from langchain.chains import SQLDatabaseChain
from langchain.agents import Tool
def count_tokens(agent, query):
with get_openai_callback() as cb:
result = agent(query)
print(f'Spent a total of {cb.total_tokens} tokens')
return result
custom_dotenv_path = './openai.env'
_ = load_dotenv(custom_dotenv_path)
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
engine = create_engine('sqlite:///college.db', echo=True)
meta = MetaData()
students = Table(
'students', meta,
Column('id', Integer, primary_key = True),
Column('firstname', String),
Column('lastname', String),
)
addresses = Table(
'addresses', meta,
Column('id', Integer, primary_key = True),
Column('st_id', Integer, ForeignKey('students.id')),
Column('zipcode', String),
Column('email', String))
meta.create_all(engine)
conn = engine.connect()
conn.execute(students.insert(), [
{'id': 1, 'firstname': 'John', 'lastname': 'Smith'},
{'id': 2, 'firstname': 'Emily', 'lastname': 'Johnson'},
{'id': 3, 'firstname': 'Michael', 'lastname': 'Rodriguez'},
{'id': 4, 'firstname': 'Sarah', 'lastname': 'Kim'},
{'id': 5, 'firstname': 'William', 'lastname': 'Brown'}
])
conn.execute(addresses.insert(), [
{'st_id': 1, 'zipcode': '90210', 'email': '[email protected]'},
{'st_id': 2, 'zipcode': '30301', 'email': '[email protected]'},
{'st_id': 3, 'zipcode': '77001', 'email': '[email protected]'},
{'st_id': 4, 'zipcode': '94101', 'email': '[email protected]'},
{'st_id': 5, 'zipcode': '10001', 'email': '[email protected]'}
])
db = SQLDatabase(engine)
from langchain.prompts.prompt import PromptTemplate
_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Use the following format:
Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"
Only use the following tables:
{table_info}
When a user requests to add a new record, adhere to the following steps:
Only have single quotes on any sql command sent to the engine. If you generate 'INSERT' statement for adding any record to the table. Please 'EXECUTE' one statement at a time.
Step 1.Student Table Entry:
Navigate to the 'students' table.
Input the desired first name and last name for the new record.
Step 2.Address Table Entry:
Once the student record is created, retrieve its 'id'.
Move to the 'addresses' table.
Using the retrieved 'id', prepare a new entry ensuring it consists of the 'student id', 'zipcode', and 'email' as initially provided.
Question: {input}"""
PROMPT = PromptTemplate(
input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE
)
db = SQLDatabase(engine)
sql_chain = SQLDatabaseChain.from_llm(llm=llm, db=db, verbose=True,
#use_query_checker=True,
prompt=PROMPT
)
tools =[
Tool(
name='student',
func=sql_chain.run,
description="Useful for when you need to answer questions about new student record"
)
]
zero_shot_agent = initialize_agent(
agent="zero-shot-react-description",
tools=tools,
llm=llm,
verbose=True,
max_iterations=5,
)
result = count_tokens(
zero_shot_agent,
"insert a new record with name Jane Everwood. Her email is '[email protected]' and her zipcode is '99999'."
)
```
### Expected behavior
I asked ChatGPT to generate SQL for me to insert a new student record into the database. This database comprises two tables: students (cols: id, firstname, lastname) and addresses (cols: id, st_id, zipcode, email). When I input "insert a new record with the name Jane Everwood. Her email is '[email protected]' and her zipcode is '99999'", the system should add one record to the students table and one corresponding record to the addresses table. Instead, I received a "sqlite3.Warning: You can only execute one statement at a time" message.
The generated SQL scripts from SQLDatabaseChain appear correct. Could the issue be related to calling "cursor.execute(statement, parameters)" to execute multiple statements? Thanks!
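For reference, a rough sketch of what executing the generated statements one at a time might look like (illustrative only; `sql_cmd` stands for the SQL string produced by the chain and is not part of the original script):

```python
# Sketch: split the multi-statement SQL and run each statement separately,
# since the sqlite3 driver refuses to execute more than one statement per call.
statements = [s.strip() for s in sql_cmd.split(";") if s.strip()]
with engine.begin() as tx_conn:
    for stmt in statements:
        tx_conn.exec_driver_sql(stmt)
```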
Here is the full message that I got:
```
> Entering new AgentExecutor chain...
I need to insert a new student record with the given information.
Action: student
Action Input: insert new record with name Jane Everwood, email '[email protected]', and zipcode '99999'
> Entering new SQLDatabaseChain chain...
insert new record with name Jane Everwood, email '[email protected]', and zipcode '99999'
SQLQuery:2023-08-22 15:44:43,547 INFO sqlalchemy.engine.Engine SELECT students.id, students.firstname, students.lastname
FROM students
LIMIT ? OFFSET ?
2023-08-22 15:44:43,548 INFO sqlalchemy.engine.Engine [generated in 0.00041s] (3, 0)
2023-08-22 15:44:43,551 INFO sqlalchemy.engine.Engine SELECT addresses.id, addresses.st_id, addresses.zipcode, addresses.email
FROM addresses
LIMIT ? OFFSET ?
2023-08-22 15:44:43,551 INFO sqlalchemy.engine.Engine [generated in 0.00025s] (3, 0)
INSERT INTO students (firstname, lastname) VALUES ('Jane', 'Everwood');
INSERT INTO addresses (st_id, zipcode, email) VALUES ((SELECT id FROM students WHERE firstname = 'Jane' AND lastname = 'Everwood'), '99999', '[email protected]');2023-08-22 15:44:45,160 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-08-22 15:44:45,162 INFO sqlalchemy.engine.Engine INSERT INTO students (firstname, lastname) VALUES ('Jane', 'Everwood');
INSERT INTO addresses (st_id, zipcode, email) VALUES ((SELECT id FROM students WHERE firstname = 'Jane' AND lastname = 'Everwood'), '99999', '[email protected]');
2023-08-22 15:44:45,162 INFO sqlalchemy.engine.Engine [generated in 0.00026s] ()
2023-08-22 15:44:45,162 INFO sqlalchemy.engine.Engine ROLLBACK
Traceback (most recent call last):
File "C:\_PyCharmProject\openai\bug_report.py", line 117, in <module>
result = count_tokens(
File "C:\_PyCharmProject\openai\bug_report.py", line 13, in count_tokens
result = agent(query)
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\base.py", line 258, in __call__
raise e
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\agents\agent.py", line 1029, in _call
next_step_output = self._take_next_step(
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\agents\agent.py", line 890, in _take_next_step
observation = tool.run(
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\tools\base.py", line 320, in run
raise e
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\tools\base.py", line 292, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\tools\base.py", line 444, in _run
self.func(
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\base.py", line 451, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\base.py", line 258, in __call__
raise e
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\sql_database\base.py", line 186, in _call
raise exc
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\chains\sql_database\base.py", line 131, in _call
result = self.database.run(sql_cmd)
File "C:\Users\xl\AppData\Roaming\Python\Python39\site-packages\langchain\utilities\sql_database.py", line 390, in run
cursor = connection.execute(text(command))
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1306, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\sql\elements.py", line 332, in _execute_on_connection
return connection._execute_clauseelement(
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1498, in _execute_clauseelement
ret = self._execute_context(
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1862, in _execute_context
self._handle_dbapi_exception(
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 2047, in _handle_dbapi_exception
util.raise_(exc_info[1], with_traceback=exc_info[2])
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\util\compat.py", line 208, in raise_
raise exception
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\base.py", line 1819, in _execute_context
self.dialect.do_execute(
File "C:\ProgramData\Anaconda3\lib\site-packages\sqlalchemy\engine\default.py", line 732, in do_execute
cursor.execute(statement, parameters)
sqlite3.Warning: You can only execute one statement at a time.
Process finished with exit code 1
``` | SQLDatabaseChain: sqlite3.Warning: You can only execute one statement at a time. | https://api.github.com/repos/langchain-ai/langchain/issues/9627/comments | 3 | 2023-08-22T20:04:19Z | 2023-11-29T16:07:20Z | https://github.com/langchain-ai/langchain/issues/9627 | 1,862,121,036 | 9,627 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The current documentation recommends running the following SQL function to create a table and function for using a supabase postgres database as a vector store.
```postgres
-- Enable the pgvector extension to work with embedding vectors
create extension vector;
-- Create a table to store your documents
create table documents (
id bigserial primary key,
content text, -- corresponds to Document.pageContent
metadata jsonb, -- corresponds to Document.metadata
embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);
-- Create a function to search for documents
create function match_documents (
query_embedding vector(1536),
match_count int default null,
filter jsonb DEFAULT '{}'
) returns table (
id bigint,
content text,
metadata jsonb,
similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
return query
select
id,
content,
metadata,
1 - (documents.embedding <=> query_embedding) as similarity
from documents
where metadata @> filter
order by documents.embedding <=> query_embedding
limit match_count;
end;
$$;
```
This creates a table with a column named `id` of type `bigint`.
However, when using the `SupabaseVectorStore` class and trying to run methods such as `add_documents` or `add_texts`, an attempt is made to insert `Document` objects into the database. The insert uses a generated UUID as the id, which is incompatible with the `bigint` column type, resulting in errors such as
```bash
postgrest.exceptions.APIError: {'code': '22P02', 'details': None, 'hint': None, 'message': 'invalid input syntax for type bigint: "dbf8aa60-8295-450c-83bc-7395e2836a6a"'}
```
### Idea or request for content:
Instead the documentation should recommend using the column type of uuid which is natively supported within supabase | DOC: Langchain Supabase Vectorstore ID Incompatability | https://api.github.com/repos/langchain-ai/langchain/issues/9624/comments | 6 | 2023-08-22T19:07:33Z | 2023-12-04T20:18:50Z | https://github.com/langchain-ai/langchain/issues/9624 | 1,862,041,049 | 9,624 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.193
python 3.8.10
### Who can help?
@agola11
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. I'm creating multiple vector stores using Chroma DB and saving them in subdirectories inside a directory DB/, all inside a Docker container. I'm using the persist method as detailed in the docs (roughly the pattern sketched below).
2. Save and load work as expected; however, after a container restart (an expected operation during development), I got the following error related to the parquet file in the DB subdirectory: `Invalid Input Error: File '/app/DB/SIC/chroma-embeddings.parquet' too small to be a Parquet file`. During interaction with Chroma DB, the app only reads data; it does not add or overwrite data.
It seems that the reading operation overwrites the chroma-embeddings.parquet file.
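For reference, the save/load pattern in use is roughly the following (a sketch; `docs` and the embedding choice are illustrative, and the path mirrors the one in the error message):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# Build and persist the index once.
db = Chroma.from_documents(docs, OpenAIEmbeddings(), persist_directory="DB/SIC")
db.persist()

# Later (e.g. after a container restart): load it again without rebuilding.
db = Chroma(persist_directory="DB/SIC", embedding_function=OpenAIEmbeddings())
```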
### Expected behavior
I expect that after the container restarts the DB could be loaded and not require to be rebuilt. | duckdb too small to be a Parquet file | https://api.github.com/repos/langchain-ai/langchain/issues/9616/comments | 3 | 2023-08-22T15:12:51Z | 2024-02-08T16:26:11Z | https://github.com/langchain-ai/langchain/issues/9616 | 1,861,679,027 | 9,616 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain-0.0.270
The Chroma DB wrapper does not pass `**kwargs` through when querying the collection.
Method: similarity_search_with_score
Code:
results = self.__query_collection(
query_embeddings=[query_embedding], n_results=k, where=filter
)
The call to __query_collection does not use `**kwargs`, which would be helpful for querying documents that contain a given string.
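A sketch of the kind of call this would enable (illustrative only; `db` and the filter values are placeholders, and the `where_document` argument is exactly what gets dropped today):

```python
# Not currently supported: **kwargs such as where_document are not forwarded.
results = db.similarity_search_with_score(
    "my query",
    k=4,
    filter={"source": "report.pdf"},
    where_document={"$contains": "string"},  # Chroma-level "document contains" filter
)
```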
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
langchain-0.0.270
The Chroma DB wrapper does not pass `**kwargs` through when querying the collection.
Method: similarity_search_with_score
Code:
results = self.__query_collection(
query_embeddings=[query_embedding], n_results=k, where=filter
)
The call to __query_collection does not use `**kwargs`, which would be helpful for querying documents that contain a given string.
### Expected behavior
langchain-0.0.270
The Chroma DB wrapper should pass `**kwargs` through when querying the collection.
Method: similarity_search_with_score
Code:
results = self.__query_collection(
query_embeddings=[query_embedding], n_results=k, where=filter
)
the query collection doesn't uses **kwargs which will be helpful to query the document that contains "string" | Croma Db Wrapper not considering the **kwargs for quering the collection | https://api.github.com/repos/langchain-ai/langchain/issues/9611/comments | 2 | 2023-08-22T14:12:14Z | 2023-11-29T16:07:30Z | https://github.com/langchain-ai/langchain/issues/9611 | 1,861,554,770 | 9,611 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: 0.0.270
python version: 3.11.0
os: Ubuntu 20.04.6 LTS
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In a Python file, there is only one line of code, as follows:
`from langchain.document_loaders import DirectoryLoader`
When I run it, it generates the following errors:
```
Traceback (most recent call last):
File "/home/tq/code/langchain/python_311/project/01_test.py", line 1, in <module>
from langchain.document_loaders import DirectoryLoader
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/agents/__init__.py", line 31, in <module>
from langchain.agents.agent import (
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/agents/agent.py", line 14, in <module>
from langchain.agents.agent_iterator import AgentExecutorIterator
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/agents/agent_iterator.py", line 30, in <module>
from langchain.tools import BaseTool
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/tools/__init__.py", line 41, in <module>
from langchain.tools.gmail import (
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/tools/gmail/__init__.py", line 3, in <module>
from langchain.tools.gmail.create_draft import GmailCreateDraft
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/tools/gmail/create_draft.py", line 7, in <module>
from langchain.tools.gmail.base import GmailBaseTool
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/langchain/tools/gmail/base.py", line 16, in <module>
from googleapiclient.discovery import Resource
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/googleapiclient/discovery.py", line 57, in <module>
from googleapiclient import _auth, mimeparse
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/googleapiclient/_auth.py", line 34, in <module>
import oauth2client.client
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/oauth2client/client.py", line 47, in <module>
from oauth2client import crypt
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/oauth2client/crypt.py", line 55, in <module>
from oauth2client import _pycrypto_crypt
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/oauth2client/_pycrypto_crypt.py", line 17, in <module>
from Crypto.PublicKey import RSA
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/Crypto/PublicKey/__init__.py", line 29, in <module>
from Crypto.Util.asn1 import (DerSequence, DerInteger, DerBitString,
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/Crypto/Util/asn1.py", line 33, in <module>
from Crypto.Util.number import long_to_bytes, bytes_to_long
File "/home/tq/anaconda3/envs/py311/lib/python3.11/site-packages/Crypto/Util/number.py", line 398
s = pack('>I', n & 0xffffffffL) + s
^
SyntaxError: invalid hexadecimal literal
```
please help, thanks!
### Expected behavior
import DirectoryLoader should be work. | import DirectoryLoader generate error | https://api.github.com/repos/langchain-ai/langchain/issues/9609/comments | 3 | 2023-08-22T13:19:30Z | 2023-08-22T17:00:35Z | https://github.com/langchain-ai/langchain/issues/9609 | 1,861,452,000 | 9,609 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.249
### Who can help?
@hwchase17 @agola11 @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.chains import ConversationalRetrievalChain
qa = ConversationalRetrievalChain.from_llm(...)  # with values such as a vector store retriever, memory, etc.
Now I want to serialize `qa`, or store it. The key is to be able to store the chain and pass it wherever I want, ideally store it in one endpoint and pass it to another endpoint.
But I get errors about not being able to serialize it, which makes this difficult.
I have tried a Flask session, json.dumps, pickle, cloudpickle, and storing it in a database, and I still get an error.
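A minimal sketch of the failure (names such as `llm`, `vectorstore`, and `memory` are placeholders for the objects described above, not code from the original report):

```python
import json
from langchain.chains import ConversationalRetrievalChain

qa = ConversationalRetrievalChain.from_llm(
    llm=llm, retriever=vectorstore.as_retriever(), memory=memory
)
json.dumps(qa)  # TypeError: Object of type ConversationalRetrievalChain is not JSON serializable
```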
### Expected behavior
I should be able to serialize or save the QA, so i can pass it from one endpoint to another, to keep user in session, and isolate a user's QA from others | TypeError: Object of type ConversationalRetrievalChain is not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/9607/comments | 4 | 2023-08-22T12:48:52Z | 2024-02-07T16:27:08Z | https://github.com/langchain-ai/langchain/issues/9607 | 1,861,394,996 | 9,607 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Greetings everyone,
I'm interested in directly saving the chromaDB vector store to an S3 bucket. Is there a way to accomplish this?
### Motivation
I want to run my LLM directly in the AWS cloud, but first I need to implement efficient loading and saving of my DB.
### Your contribution
I have already implemented a function that loads data from S3 and creates the vector store.
```
import boto3
from langchain.document_loaders import S3DirectoryLoader
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter  # needed for the splitter used below
# Initialize the S3 client
s3 = boto3.client('s3')
# Specify the S3 bucket and directory path
bucket_name = 'bucket_name'
directory_path = 's3_path'  # renamed from directory_key so the Prefix references below resolve
# List objects with a delimiter to get only common prefixes (directories)
response = s3.list_objects_v2(Bucket=bucket_name, Prefix=directory_path, Delimiter='/')
# Extract the common prefixes (directories) from the response
common_prefixes = response.get('CommonPrefixes', [])
# Print the directory names
for prefix in common_prefixes:
print(prefix['Prefix'])
def create_chromadb_from_s3():
# Load data from s3
docs = []
for key in s3.list_objects_v2(Bucket=bucket_name, Prefix=directory_path, Delimiter='/').get('CommonPrefixes', []):
loader = S3DirectoryLoader(bucket_name, key['Prefix'])
docs.extend(loader.load())
# Split documents
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
texts = text_splitter.split_documents(docs)
embeddings = OpenAIEmbeddings(openai_api_key=open_ai_secret)
db = Chroma.from_documents(
texts, embedding=embeddings
)
return db
```
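One possible direction, sketched here for illustration only (not a built-in feature; it assumes the store was created with a persist_directory, and reuses the `s3` client and `bucket_name` from above):

```python
import os

def upload_chromadb_to_s3(db, local_dir="chroma_db", s3_prefix="chroma_db/"):
    # Persist the index locally first, then mirror the directory to S3.
    db.persist()
    for root, _, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            key = s3_prefix + os.path.relpath(path, local_dir)
            s3.upload_file(path, bucket_name, key)
```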
Unfortunately there is no way to set "persist_dir" as connection to the s3 bucket. Any idea how we can implement it? | Upload ChromDB vectordb from s3 | https://api.github.com/repos/langchain-ai/langchain/issues/9606/comments | 8 | 2023-08-22T12:43:32Z | 2024-02-14T16:11:38Z | https://github.com/langchain-ai/langchain/issues/9606 | 1,861,385,729 | 9,606 |
[
"hwchase17",
"langchain"
]
| ### Describe the feature or improvement you're requesting
When making a direct call to OpenAI's POST /v1/chat/completions endpoint, we receive valuable headers that provide information about the rate limiting, including:
```
x-ratelimit-limit-requests: 3500
x-ratelimit-remaining-requests: 3499
x-ratelimit-reset-requests: 17ms
```
However, when using the `ChatOpenAI` and `OpenAIEmbeddings` APIs, these rate limit headers are not returned in the response or exposed on the `OpenAICallbackHandler` object (which only tracks values such as total_tokens and total_cost). This makes it challenging to track and manage rate limits programmatically.
Can we get the rate limit header values when using `ChatOpenAI` and `OpenAIEmbeddings`?
It would be nice if we could get access to these headers, or have them returned in a helpful format along with the response.
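For comparison, the headers are easy to read when calling the endpoint directly (a sketch outside LangChain; the request payload is just a placeholder):

```python
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "ping"}]},
    timeout=30,
)
print(resp.headers.get("x-ratelimit-remaining-requests"))
print(resp.headers.get("x-ratelimit-reset-requests"))
```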
### Motivation
Having access to rate limit information is crucial for developers to effectively manage their API usage, especially when using these APIs in production environments. This addition would greatly enhance the usability and monitoring capabilities of the OpenAI API.
| Expose x-ratelimit-* headers from OpenAI API with Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/9601/comments | 3 | 2023-08-22T11:04:51Z | 2024-05-14T16:06:16Z | https://github.com/langchain-ai/langchain/issues/9601 | 1,861,223,410 | 9,601 |
[
"hwchase17",
"langchain"
]
| ### System Info
python==3.9
langchain==0.0.246
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to use GPT4ALL model with MultiRouterChain and it is throwing some weird error.
## Utility Function:
```
def create_transform_func(remove_index):
def my_transform_func(inputs: dict):
return transform_func(inputs, remove_index)
return my_transform_func
def transform_func(inputs: dict, remove_index = 5) -> dict:
text = inputs['input'].strip()
accumulate = ""
for i, s in enumerate(text.split(' ')):
i = i + 1
if i % remove_index != 0:
accumulate += f"{s} "
return {"input": accumulate.strip()}
```
```
"""Callback Handler that prints to std out."""
from typing import Any, Dict, List, Optional, Union
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction, AgentFinish
from langchain.schema import LLMResult
from pathlib import Path
from datetime import datetime
import re
class FileCallbackHandler(BaseCallbackHandler):
"""Callback Handler that prints to std out."""
def __init__(self,
path: Path,
print_prompts: bool=False,
print_class: bool=False,
title: Optional[str] = "Conversation Log",
color: Optional[str] = None
) -> None:
"""Initialize callback handler."""
self.color = color
self.print_prompts = print_prompts
self.print_class = print_class
self.path = path
self.file_handle = open(path, 'w')
self.title = title
self.output_keys = []
self.output_values = []
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> None:
"""Print out the prompts."""
if self.print_prompts:
self.file_handle.write(f"=============== PROMPTS ==================\n")
for prompt in prompts:
self.file_handle.write(f"{prompt}\n")
self.file_handle.write("\n")
self.file_handle.flush()
self.file_handle.write(f"============ END PROMPTS =================\n\n")
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Do nothing."""
self.file_handle.write(f"=============== LLM END ==================\n")
pass
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""Do nothing."""
pass
def on_llm_error(
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> None:
"""Do nothing."""
pass
def on_chain_start(
self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
) -> None:
"""Print out that we are entering a chain."""
if self.print_class:
self.file_handle.write(f"================ CLASS ===================\n")
class_name = serialized["name"]
self.file_handle.write(f">>> class: {class_name}\n")
self.file_handle.write(f"============== END CLASS =================\n\n")
self.file_handle.flush()
self.output_keys.append(list(inputs.keys()))
self.output_values.append(list(inputs.values()))
def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> None:
"""Print out that we finished a chain."""
# print("\n\033[1m> Finished chain.\033[0m")
# self.file_handle.close()
self.file_handle.write(f"================ OUTPUT ==================\n")
keys = []
values = []
for k, v in outputs.items():
keys.append(k)
values.append(v)
self.file_handle.write(f"{k}:\n")
self.file_handle.write(f"{v}\n\n")
self.output_keys.append(keys)
self.output_values.append(values)
self.file_handle.write(f"================ OUTPUT ==================\n")
self.file_handle.flush()
def on_chain_error(
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> None:
"""Do nothing."""
pass
def on_tool_start(
self,
serialized: Dict[str, Any],
input_str: str,
**kwargs: Any,
) -> None:
"""Do nothing."""
self.file_handle.write(datetime.today().strftime('%Y-%m-%d'))
self.file_handle.write("\n========")
self.file_handle.flush()
def on_agent_action(
self, action: AgentAction, color: Optional[str] = None, **kwargs: Any
) -> Any:
"""Run on agent action."""
self.file_handle.write(f">>> action: {action.log}")
def on_tool_end(
self,
output: str,
color: Optional[str] = None,
observation_prefix: Optional[str] = None,
llm_prefix: Optional[str] = None,
**kwargs: Any,
) -> None:
"""If not the final action, print out observation."""
if observation_prefix is not None:
self.file_handle.write(f"\n{observation_prefix}")
self.file_handle.write(output)
if llm_prefix is not None:
self.file_handle.write(f"\n{llm_prefix}")
self.file_handle.flush()
def on_tool_error(
self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
) -> None:
"""Do nothing."""
pass
def on_text(self, text: str, color: Optional[str] = None, end: str = "", **kwargs: Any) -> None:
"""Run when agent ends."""
self.file_handle.write(f"================ TEXT ===================\n")
self.file_handle.write(f"{text}\n")
self.file_handle.flush()
self.file_handle.write(f"============== END TEXT =================\n\n")
agent = extract_agent(text)
if agent != "":
self.output_keys.append([agent])
self.output_values.append([text])
def on_agent_finish(
self, finish: AgentFinish, color: Optional[str] = None, **kwargs: Any
) -> None:
"""Run on agent end."""
self.file_handle.write(f"{finish.log}\n")
self.file_handle.flush()
self.file_handle.close()
def create_html(self):
table: str = """
<table class="table table-striped">
<tr>
<th>
Agent
</th>
<th>
Type
</th>
<th>
Output
</th>
</tr>
"""
dedup_hash = set()
for keys, values in zip(self.output_keys, self.output_values):
for key, val in zip(keys, values):
if val not in dedup_hash:
dedup_hash.add(val)
else:
continue
agent = extract_agent(val)
table += (f"""
<tr>
<td>
</td>
<td>
<pre>{key}</pre>
</td>
<td>
<pre>{val}</pre>
</td>
</tr>
""" if agent == "" else f"""
<tr>
<td>{agent}</td>
<td></td>
<td></td>
</tr>
""")
table += "</table>"
target_file = f"{self.path.stem}.html"
with open(target_file, "w", encoding='utf-8') as f:
f.write(f"""
<html>
<head>
<meta charset="UTF-8" />
<link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet" integrity="sha384-9ndCyUaIbzAi2FUVXJi0CjmCapSmO7SnpJef0486qhLnuZ2cdeRhO02iuK6FUUVM" crossorigin="anonymous">
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.bundle.min.js" integrity="sha384-geWF76RCwLtnZ8qwWowPQNguL3RmwHVBC9FhGdlKrxdiJJigb/j/68SIy3Te4Bkz" crossorigin="anonymous"></script>
<style>
pre {{
white-space: pre-wrap;
}}
</style>
</head>
<body>
<div class="container-fluid">
<h1>{self.title}</h1>
<h2>{generate_timestamp()}</h2>
{table}
</div>
</body>
</html>
""")
print(f"Saved chat content to {target_file}")
def generate_timestamp():
# Get the current date and time
now = datetime.now()
# Get the weekday, day, month, year, and time in English
weekday = now.strftime("%A")
day = now.strftime("%d")
month = now.strftime("%B")
year = now.strftime("%Y")
time = now.strftime("%H:%M:%S")
# Create the timestamp string
timestamp = f"{weekday}, {day} {month} {year} {time}"
return timestamp
def extract_input(text):
return re.sub(r".+?'input':\s*'(.+)'}", r"\1", text)
def extract_agent(text):
regex = r"^([a-z\s]+)\:.+"
match = re.search(regex, text)
if match is None:
return ""
return re.sub(regex, r"\1", text)
```
## Driver Code
```
from langchain.chains.router import MultiRouteChain, RouterChain
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.chains.llm import LLMChain
from langchain.prompts import PromptTemplate
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE
from langchain.chains import SimpleSequentialChain, TransformChain
from prompt_toolkit import HTML, prompt
import langchain.callbacks
from replace_function import create_transform_func
from langchain.callbacks import StdOutCallbackHandler
from FileCallbackHandler import FileCallbackHandler
from pathlib import Path
from typing import Mapping, List, Union
import openai, os
from pprint import pprint  # pprint is used below to print the destination chains
from langchain.llms import GPT4All, LlamaCpp, OpenAI, AzureOpenAI, SelfHostedHuggingFaceLLM, HuggingFacePipeline
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
file_ballback_handler = FileCallbackHandler(Path('router_chain.txt'), print_prompts=True)
class GPT4_Config():
current_path = os.path.dirname(os.path.realpath(__file__))
llm = GPT4All(
model=os.path.abspath(os.path.join(current_path, r"../../llms/gpt4all/ggml-gpt4all-j-v1.3-groovy.bin")),
# n_ctx=1000,
verbose=True,
backend='gptj',
callbacks=[StreamingStdOutCallbackHandler()]
)
cfg = GPT4_Config()
class PromptFactory():
developer_template = """You are a very smart Python programmer. \
You provide answers for algorithmic and computer problems in Python. \
You explain the code in a detailed manner. \
Here is a question:
{input}
Answer:"""
python_test_developer_template = """You are a very smart Python programmer who writes unit tests using pytest. \
You provide test functions written in pytest with asserts. \
You explain the code in a detailed manner. \
Here is a input on which you create a test:
{input}
Answer:"""
kotlin_developer_template = """You are a very smart Kotlin programmer. \
You provide answers for algorithmic and computer science problems in Kotlin. \
You explain the code in a detailed manner. \
Here is a question:
{input}
Answer:"""
kotlin_test_developer_template = """You are a very smart Kotlin programmer who writes unit tests using JUnit 5. \
You provide test functions written in JUnit 5 with JUnit asserts. \
You explain the code in a detailed manner. \
Here is a input on which you create a test:
{input}
Answer:"""
poet_template = """You are a poet who replies to creative requests with poems in English. \
You provide answers which are poems in the style of Lord Byron or Shakespeare. \
Here is a question:
{input}
Answer:"""
wiki_template = """You are a Wikipedia expert. \
You answer common knowledge questions based on Wikipedia knowledge. \
Your explanations are detailed and in plain English.
Here is a question:
{input}
Answer:"""
image_creator_template = """You create a creator of images. \
You provide graphic representations of answers using SVG images.
Here is a question:
{input}
Answer:"""
legal_expert_template = """You are a UK or US legal expert. \
You explain questions related to the UK or US legal systems in an accessible language \
with a good number of examples.
Here is a question:
{input}
Answer:"""
word_filler = """Your job is to fill the words in a sentence in which words seems to be missing.
Here is the input:
{input}
Answer:"""
python_programmer = 'python programmer'
kotlin_programmer = 'kotlin programmer'
programmer_test_dict = {
python_programmer: python_test_developer_template,
kotlin_programmer: kotlin_test_developer_template
}
word_filler_name = 'word filler'
prompt_infos = [
{
'name': python_programmer,
'description': 'Good for questions about coding and algorithms in Python',
'prompt_template': developer_template
},
{
'name': 'python tester',
            'description': 'Good for generating Python tests from existing Python code',
'prompt_template': python_test_developer_template
},
{
'name': kotlin_programmer,
'description': 'Good for questions about coding and algorithms in Kotlin',
'prompt_template': kotlin_developer_template
},
{
'name': 'kotlin tester',
            'description': 'Good for generating Kotlin tests from existing Kotlin code',
'prompt_template': kotlin_test_developer_template
},
{
'name': 'poet',
            'description': 'Good for generating poems for creative questions',
'prompt_template': poet_template
},
{
'name': 'wikipedia expert',
            'description': 'Good for answering questions about general knowledge',
'prompt_template': wiki_template
},
{
'name': 'graphical artist',
'description': 'Good for answering questions which require an image output',
'prompt_template': image_creator_template
},
{
'name': 'legal expert',
'description': 'Good for answering questions which are related to UK or US law',
'prompt_template': legal_expert_template
},
{
'name': word_filler_name,
'description': 'Good at filling words in sentences with missing words',
'prompt_template': word_filler
}
]
class MyMultiPromptChain(MultiRouteChain):
"""A multi-route chain that uses an LLM router chain to choose amongst prompts."""
router_chain: RouterChain
"""Chain for deciding a destination chain and the input to it."""
destination_chains: Mapping[str, Union[LLMChain, SimpleSequentialChain]]
"""Map of name to candidate chains that inputs can be routed to."""
default_chain: LLMChain
"""Default chain to use when router doesn't map input to one of the destinations."""
@property
def output_keys(self) -> List[str]:
return ["text"]
def generate_destination_chains():
"""
Creates a list of LLM chains with different prompt templates.
Note that some of the chains are sequential chains which are supposed to generate unit tests.
"""
prompt_factory = PromptFactory()
destination_chains = {}
for p_info in prompt_factory.prompt_infos:
print("="*70)
name = p_info['name']
print(f"======= Prompt Name =======: \n{name}")
prompt_template = p_info['prompt_template']
print(f"======= Prompt Template =======: \n{prompt_template}")
# callbacks = [StdOutCallbackHandler]
chain = LLMChain(
llm=cfg.llm,
prompt=PromptTemplate(template=prompt_template, input_variables=['input']),
output_key='text',
callbacks=[file_ballback_handler],
verbose=True
)
if name not in prompt_factory.programmer_test_dict.keys() and name != prompt_factory.word_filler_name:
print("Addition using 1st case")
destination_chains[name] = chain
elif name == prompt_factory.word_filler_name:
print("Addition using 2nd case")
transform_chain = TransformChain(
input_variables=["input"], output_variables=["input"], transform=create_transform_func(3), callbacks=[file_ballback_handler]
)
destination_chains[name] = SimpleSequentialChain(
chains=[transform_chain, chain], verbose=True, output_key='text', callbacks=[file_ballback_handler]
)
else:
print("Addition using 3rd case")
# Normal chain is used to generate code
# Additional chain to generate unit tests
template = prompt_factory.programmer_test_dict[name]
prompt_template = PromptTemplate(input_variables=["input"], template=template)
test_chain = LLMChain(llm=cfg.llm, prompt=prompt_template, output_key='text', callbacks=[file_ballback_handler])
destination_chains[name] = SimpleSequentialChain(
chains=[chain, test_chain], verbose=True, output_key='text', callbacks=[file_ballback_handler]
)
print("="*70)
print("\n\n\n")
print("============= Destination Chains =============")
pprint(destination_chains)
default_chain = ConversationChain(llm=cfg.llm, output_key="text")
return prompt_factory.prompt_infos, destination_chains, default_chain
def generate_router_chain(prompt_infos, destination_chains, default_chain):
"""
    Generates the router chain from the prompt infos.
    :param prompt_infos The prompt information generated above.
:param destination_chains The LLM chains with different prompt templates
:param default_chain A default chain
"""
destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
print("====================== DESTINATIONS ======================")
print(destinations)
destinations_str = '\n'.join(destinations)
print("====================== DESTINATIONS STRINGS ======================")
print(destinations_str)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
print("====================== ROUTER TEMPLATE ======================")
print(router_template)
router_prompt = PromptTemplate(
template=router_template,
input_variables=['input'],
output_parser=RouterOutputParser()
)
print("====================== PROMPT TEMPLATE ======================")
print(router_prompt)
router_chain = LLMRouterChain.from_llm(cfg.llm, router_prompt)
multi_route_chain = MyMultiPromptChain(
router_chain=router_chain,
destination_chains=destination_chains,
default_chain=default_chain,
verbose=True,
callbacks=[file_ballback_handler]
)
print("====================== MULTI ROUTER CHAIN ======================")
print(multi_route_chain)
return multi_route_chain
if __name__ == "__main__":
prompt_infos, destination_chains, default_chain = generate_destination_chains()
chain = generate_router_chain(prompt_infos, destination_chains, default_chain)
with open('conversation.log', 'w') as f:
while True:
question = prompt(
HTML("<b>Type <u>Your question</u></b> ('q' to exit, 's' to save to html file): ")
)
if question == 'q':
break
if question in ['s', 'w'] :
file_ballback_handler.create_html()
continue
result = chain.run(question)
f.write(f"Q: {question}\n\n")
f.write(f"A: {result}")
f.write('\n\n ====================================================================== \n\n')
print(result)
print()
```
## Error:
```
> Entering new MyMultiPromptChain chain...
/home/gpu-titan/anaconda3/envs/Seer/lib/python3.9/site-packages/langchain/chains/llm.py:275: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
{
"destination": "UK",
"next_inputs": "['Inheritance Tax', 'Legal System']"}UK: {'input': "['Inheritance Tax', 'Legal System']"}Traceback (most recent call last):
File "/home/gpu-titan/Desktop/Ramish/Seer/seer_main/complex_chain/complex_chain.py", line 360, in <module>
result = chain.run(question)
File "/home/gpu-titan/anaconda3/envs/Seer/lib/python3.9/site-packages/langchain/chains/base.py", line 451, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/gpu-titan/anaconda3/envs/Seer/lib/python3.9/site-packages/langchain/chains/base.py", line 258, in __call__
raise e
File "/home/gpu-titan/anaconda3/envs/Seer/lib/python3.9/site-packages/langchain/chains/base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/gpu-titan/anaconda3/envs/Seer/lib/python3.9/site-packages/langchain/chains/router/base.py", line 106, in _call
raise ValueError(
ValueError: Received invalid destination chain name 'UK'
```
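If it helps while debugging, `MultiRouteChain` appears to expose a `silent_errors` flag that falls back to the default chain instead of raising on an unknown destination. A sketch only, not a root-cause fix, and assuming the same objects built above:

```python
# Mitigation sketch: fall back to default_chain when the router emits an unknown name.
multi_route_chain = MyMultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    silent_errors=True,
    verbose=True,
)
```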
### Expected behavior
The prompt is correctly routed to the correct chain, which returns the correct answer.
## Expected Output:
```
> Finished chain.
{'question': 'What are the main differences between the UK and US legal systems in terms of the inheritance tax?', 'text': ' The
main difference between the UK and US legal systems regarding inheritance taxes lies in their respective approaches to
calculating, taxing, and paying them out. In both countries, there may be variations depending on factors such as wealth or
family structure. However, some key differences include:\n1) Inheritance Tax Rates - While the rates of inheritance tax are similar
between the two jurisdictions (7% for assets over £325k in England/Wales; 7.5% for assets over $675k in Scotland), there may be
variations depending on factors such as wealth or family structure, which can affect how much is owed and when it should be
paid out to beneficiaries.\n2) Inheritance Tax Rules - The UK has a more complex inheritance tax system than the US, with
different rules governing who qualifies for an exemption from paying inheritance tax (e.g., married couples vs unmarried
individuals). In addition, there may be variations in how assets are taxed and when they should be transferred to beneficiaries
before death.\n3) Inheritance Tax Planning - Both countries offer various strategies such as trusts or wills that can help reduce
the amount of inheritance tax owed by transferring wealth out-of-estate at a lower rate than would otherwise apply, but with
different rules governing how these plans are set up'}
``` | MultiRouteChain not working as expected | https://api.github.com/repos/langchain-ai/langchain/issues/9600/comments | 4 | 2023-08-22T10:31:29Z | 2024-06-15T23:43:20Z | https://github.com/langchain-ai/langchain/issues/9600 | 1,861,170,095 | 9,600 |
[
"hwchase17",
"langchain"
]
| ### System Info
(.venv) yale@LAPTOP-MATEBOOK:~/work/llm-app$ python --version
Python 3.10.12
(.venv) yale@LAPTOP-MATEBOOK:~/work/llm-app$ pip list|grep langchain
langchain 0.0.262
langchainplus-sdk 0.0.20
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Update the official Tool example at https://python.langchain.com/docs/modules/agents/tools/custom_tools with a tool that takes no arguments:
```python
class NoParameterInput(BaseModel):
pass
tools = [
Tool(
name="GetToday",
func=lambda: "2023-09-30",
description="Get today's date",
args_schema=NoParameterInput
),
```
When running, an error was triggered:
```
Traceback (most recent call last):
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/gradio/routes.py", line 437, in run_predict
output = await app.get_blocks().process_api(
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 1352, in process_api
result = await self.call_function(
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 1077, in call_function
prediction = await anyio.to_thread.run_sync(
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/home/yale/work/llm-app/local_tests/gradio/langchain_chatbot.py", line 356, in qa_answer_question
qa_answer = chain(question)
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
raise e
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 276, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/chains/router/base.py", line 100, in _call
return self.destination_chains[route.destination](
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
raise e
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 276, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1036, in _call
next_step_output = self._take_next_step(
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 891, in _take_next_step
observation = tool.run(
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/tools/base.py", line 340, in run
raise e
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/tools/base.py", line 331, in run
tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
File "/home/yale/work/llm-app/.venv/lib/python3.10/site-packages/langchain/tools/base.py", line 488, in _to_args_and_kwargs
raise ToolException(
langchain.tools.base.ToolException: Too many arguments to single-input tool GetToday. Args: []
```
The source of `_to_args_and_kwargs()`:
```python
def _to_args_and_kwargs(self, tool_input: Union[str, Dict]) -> Tuple[Tuple, Dict]:
"""Convert tool input to pydantic model."""
args, kwargs = super()._to_args_and_kwargs(tool_input)
# For backwards compatibility. The tool must be run with a single input
all_args = list(args) + list(kwargs.values())
if len(all_args) != 1:
raise ToolException(
f"Too many arguments to single-input tool {self.name}."
f" Args: {all_args}"
)
return tuple(all_args), {}
```
This check does not accept tools that take no arguments.
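For context, a common workaround for single-input tools is to give the function a throwaway string parameter rather than a zero-argument signature (a sketch, not a fix for the underlying check):

```python
# Workaround sketch: accept and ignore the single string input that Tool always passes.
tools = [
    Tool(
        name="GetToday",
        func=lambda _: "2023-09-30",
        description="Get today's date",
    ),
]
```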
### Expected behavior
Calls GetToday() successfully. | _to_args_and_kwargs() failed to handle tool definition with no arguments | https://api.github.com/repos/langchain-ai/langchain/issues/9599/comments | 2 | 2023-08-22T10:20:56Z | 2023-11-28T16:07:40Z | https://github.com/langchain-ai/langchain/issues/9599 | 1,861,150,371 | 9,599 |
[
"hwchase17",
"langchain"
]
| Hi Team,
Is there a schedule for supporting plugins with auth?
thanks. | Schedule for supporting plugins with auth | https://api.github.com/repos/langchain-ai/langchain/issues/9597/comments | 1 | 2023-08-22T09:49:48Z | 2023-11-28T16:07:45Z | https://github.com/langchain-ai/langchain/issues/9597 | 1,861,082,845 | 9,597 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version 0.0.270
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create multiple instances of ChatMessageHistory and add history; they share the same list.
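A sketch of the reproduction being described (illustrative; the final observation reflects the behaviour reported here, not verified independently):

```python
from langchain.memory import ChatMessageHistory

h1 = ChatMessageHistory()
h2 = ChatMessageHistory()
h1.add_user_message("hello")
# Per this report, h2.messages is no longer empty because the default list is shared.
print(len(h2.messages))
```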
### Expected behavior
`messages: List[BaseMessage] = []`
to
`messages: List[BaseMessage] = Field(default_factory=list)`
Fix [here](https://github.com/langchain-ai/langchain/pull/9594) | List in ChatMessageHistory is not correctly initialized | https://api.github.com/repos/langchain-ai/langchain/issues/9595/comments | 1 | 2023-08-22T09:38:59Z | 2023-08-25T12:08:49Z | https://github.com/langchain-ai/langchain/issues/9595 | 1,861,060,406 | 9,595 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add option to specify dtype (e.g. `float16`) in `langchain.llms.VLLM()` constructor,
### Motivation
Currently, `vllm` library provides an argument to use `float16` dtype, but langchain doesn't.
There is no way to use langchain VLLM with a GPU < 8.0 compute capability.


### Your contribution
I can contribute with guideline from langchain team | Allow specifying dtype in `langchain.llms.VLLM` | https://api.github.com/repos/langchain-ai/langchain/issues/9593/comments | 1 | 2023-08-22T09:33:12Z | 2023-08-23T03:30:28Z | https://github.com/langchain-ai/langchain/issues/9593 | 1,861,049,462 | 9,593 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain Version: 0.0.260
Python Version: 3.11.4
Operating System: ubuntu-22.04
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
After updating from version 0.0.256 to 0.0.260, I noticed that the TransformChain subclass which used to accept additional arguments (memory) no longer works. This regression might be related to changes made in [Pull Request #8762](https://github.com/langchain-ai/langchain/pull/8762).
### Relevant Code:
Here's a snippet of the MySubclassedTransformChain:
```python
class MySubclassedTransformChain(TransformChain):
memory: BaseChatMemory
def _call(
self,
inputs: Dict[str, str],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
return self.transform(inputs, memory=self.memory)
async def _acall(
self,
inputs: Dict[str, Any],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
if self.atransform is not None:
return await self.atransform(inputs, memory=self.memory)
else:
self._log_once(
"TransformChain's atransform is not provided, falling"
" back to synchronous transform"
)
return self.transform(inputs, memory=self.memory)
```
Usage:
```
transform_chain = MySubclassedTransformChain(input_variables=["input"], output_variables=["output"], transform=transform_func, memory=memory)
llm_chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain])
sequential_chain.run(message)
```
Exception Stack Trace:
```
app-1 | ERROR: Exception in ASGI application
app-1 | Traceback (most recent call last):
app-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
app-1 | result = await app( # type: ignore[func-returns-value]
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
app-1 | return await self.app(scope, receive, send)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 289, in __call__
app-1 | await super().__call__(scope, receive, send)
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 348, in _sentry_patched_asgi_app
app-1 | return await middleware(scope, receive, send)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/asgi.py", line 141, in _run_asgi3
app-1 | return await self._run_app(scope, lambda: self.app(scope, receive, send))
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/asgi.py", line 190, in _run_app
app-1 | raise exc from None
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/asgi.py", line 185, in _run_app
app-1 | return await callback()
app-1 | ^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
app-1 | await self.middleware_stack(scope, receive, send)
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 143, in _create_span_call
app-1 | return await old_call(app, scope, new_receive, new_send, **kwargs)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
app-1 | raise exc
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
app-1 | await self.app(scope, receive, _send)
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 242, in _sentry_exceptionmiddleware_call
app-1 | await old_call(self, scope, receive, send)
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 143, in _create_span_call
app-1 | return await old_call(app, scope, new_receive, new_send, **kwargs)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
app-1 | raise exc
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
app-1 | await self.app(scope, receive, sender)
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/starlette.py", line 143, in _create_span_call
app-1 | return await old_call(app, scope, new_receive, new_send, **kwargs)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 20, in __call__
app-1 | raise e
app-1 | File "/usr/local/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 17, in __call__
app-1 | await self.app(scope, receive, send)
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
app-1 | await route.handle(scope, receive, send)
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
app-1 | await self.app(scope, receive, send)
app-1 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
app-1 | response = await func(request)
app-1 | ^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/fastapi.py", line 131, in _sentry_app
app-1 | return await old_app(*args, **kwargs)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 273, in app
app-1 | raw_response = await run_endpoint_function(
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 190, in run_endpoint_function
app-1 | return await dependant.call(**values)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/app/app/main.py", line 47, in post_message
app-1 | result = await handle_message(message)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/app/app/message_chain.py", line 20, in handle_message
app-1 | assistant_result = assistant_chain.run(extraction_result, memory=manager.memory)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/app/app/assistant_chain.py", line 63, in run
app-1 | response = sequential_chain.run(message)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 475, in run
app-1 | return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 282, in __call__
app-1 | raise e
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 276, in __call__
app-1 | self._call(inputs, run_manager=run_manager)
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/sequential.py", line 180, in _call
app-1 | _input = chain.run(_input, callbacks=_run_manager.get_child(f"step_{i+1}"))
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 475, in run
app-1 | return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 282, in __call__
app-1 | raise e
app-1 | File "/usr/local/lib/python3.11/site-packages/langchain/chains/base.py", line 276, in __call__
app-1 | self._call(inputs, run_manager=run_manager)
app-1 | File "/app/app/langchain_extensions.py", line 14, in _call
app-1 | return self.transform(inputs, memory=self.memory)
app-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
app-1 | TypeError: Runnable.transform() got an unexpected keyword argument 'memory'
```
Transform Function:
```
def transform_func(inputs: dict, memory: BaseChatMemory) -> dict:
text = inputs["text"]
# mutate text with memory
return {"output": text}
```
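As a possible interim workaround (an untested sketch — `MemoryTransformChain` and `transform_func` are names I made up, not part of langchain), subclassing `Chain` directly avoids the name clash with `Runnable.transform()` entirely:
```python
from typing import Callable, Dict, List, Optional

from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain
from langchain.memory.chat_memory import BaseChatMemory


class MemoryTransformChain(Chain):
    # store the user callable under a name that cannot shadow Runnable.transform()
    memory: BaseChatMemory
    transform_func: Callable[..., Dict[str, str]]
    input_variables: List[str]
    output_variables: List[str]

    @property
    def input_keys(self) -> List[str]:
        return self.input_variables

    @property
    def output_keys(self) -> List[str]:
        return self.output_variables

    def _call(
        self,
        inputs: Dict[str, str],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        return self.transform_func(inputs, memory=self.memory)
```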
### Expected behavior
The subclass should handle additional arguments in MySubclassedTransformChain as it did in version 0.0.256. | Regression: Additional arguments in TransformChain subclass no longer work | https://api.github.com/repos/langchain-ai/langchain/issues/9587/comments | 2 | 2023-08-22T07:34:56Z | 2023-11-28T16:07:50Z | https://github.com/langchain-ai/langchain/issues/9587 | 1,860,801,274 | 9,587 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: 0.0.267
### Who can help?
@hwchase17
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
# Redis vector database initialization

```python
import os

from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis

embeddings = OpenAIEmbeddings(
    openai_api_key=os.getenv("OPENAI_API_TYPE"),
    deployment=os.getenv("OPENAI_EMBEDDING_MODEL_NAME"),
    client="azure",
    chunk_size=1,
)
redis_vector_db = Redis(
    redis_url="redis://localhost:6379",
    index_name="test_1",
    embedding_function=embeddings.embed_query,
)
```

# Load pdf

```python
loader = PyPDFLoader(file_path)
docs = loader.load_and_split()
redis_vector_db.add_documents(docs, index_name="test_1")
```

# Searching --> bug here

```python
docs = redis_vector_db.similarity_search(query=question, k=4)
```
-------------------------- This is the end of python script ---------------------------------------
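One thing I noticed while debugging (a sketch, assuming this version's Redis vectorstore API): building the store with `Redis.from_documents` creates the index schema, whereas instantiating `Redis(...)` directly assumes the index already exists in Redis. I would expect the search to succeed when the store is created like this:
```python
# Hypothetical rewrite of the initialization above: from_documents creates
# the "test_1" index if it does not exist yet, then search can run against it.
redis_vector_db = Redis.from_documents(
    docs,
    embeddings,
    redis_url="redis://localhost:6379",
    index_name="test_1",
)
docs = redis_vector_db.similarity_search(query=question, k=4)
```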
### Expected behavior
# Bug: even though the index name is set, similarity_search fails with "no such index"
File [c:\Users\User\anaconda3\envs\advantech\lib\site-packages\langchain\vectorstores\redis.py:284](file:///C:/Users/User/anaconda3/envs/advantech/lib/site-packages/langchain/vectorstores/redis.py:284), in Redis.similarity_search(self, query, k, **kwargs)
271 def similarity_search(
272 self, query: str, k: int = 4, **kwargs: Any
273 ) -> List[Document]:
274 """
275 Returns the most similar indexed documents to the query text.
276
(...)
282 List[Document]: A list of documents that are most similar to the query text.
283 """
--> 284 docs_and_scores = self.similarity_search_with_score(query, k=k)
285 return [doc for doc, _ in docs_and_scores]
File [c:\Users\User\anaconda3\envs\advantech\lib\site-packages\langchain\vectorstores\redis.py:361](file:///C:/Users/User/anaconda3/envs/advantech/lib/site-packages/langchain/vectorstores/redis.py:361), in Redis.similarity_search_with_score(self, query, k)
354 params_dict: Mapping[str, str] = {
355 "vector": np.array(embedding) # type: ignore
...
904 if isinstance(response, ResponseError):
--> 905 raise response
906 return response
ResponseError: test_1: no such index
# My observation
print(redis_vector_db.index_name)
will show test_1 | Direct initial redis database can't successfully use searching function since index missing | https://api.github.com/repos/langchain-ai/langchain/issues/9585/comments | 6 | 2023-08-22T07:24:40Z | 2024-02-11T16:16:17Z | https://github.com/langchain-ai/langchain/issues/9585 | 1,860,784,286 | 9,585 |
[
"hwchase17",
"langchain"
]
| ### Feature request
```
if self.show_progress_bar:
try:
import tqdm
_iter = tqdm.tqdm(range(0, len(tokens), _chunk_size))
except ImportError:
_iter = range(0, len(tokens), _chunk_size)
```
The current code does not render well in Jupyter notebooks, so it should be replaced with `tqdm.auto`.
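A minimal sketch of the proposed change (same fallback behaviour, just importing from `tqdm.auto` so the notebook-friendly widget is used when available):
```python
if self.show_progress_bar:
    try:
        from tqdm.auto import tqdm  # picks the notebook widget under Jupyter

        _iter = tqdm(range(0, len(tokens), _chunk_size))
    except ImportError:
        _iter = range(0, len(tokens), _chunk_size)
```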
### Motivation
The plain `tqdm` progress bar just does not render well in a Jupyter notebook, especially if any warning is emitted.
### Your contribution
I'll make a PR | Replace `tqdm` with `tqdm.auto` | https://api.github.com/repos/langchain-ai/langchain/issues/9582/comments | 2 | 2023-08-22T06:56:51Z | 2023-08-23T00:41:43Z | https://github.com/langchain-ai/langchain/issues/9582 | 1,860,735,901 | 9,582 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version 0.0.270
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```bash
cd libs/experimental
make test
```
```
============================================================================= short test summary info =============================================================================
FAILED tests/unit_tests/test_smartllm.py::test_all_steps - IndexError: list index out of range
FAILED tests/unit_tests/test_smartllm.py::test_intermediate_output - IndexError: list index out of range
========== 2 failed, 28 passed, 16 warnings in 4.37s =========
```
### Expected behavior
No error | make test in experimental crash | https://api.github.com/repos/langchain-ai/langchain/issues/9581/comments | 2 | 2023-08-22T06:49:07Z | 2023-09-19T08:47:29Z | https://github.com/langchain-ai/langchain/issues/9581 | 1,860,722,360 | 9,581 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add support for DocAI from Google Cloud as a pdf parser
### Motivation
It might be faster for large samples of documents (and maybe better in some cases).
### Your contribution
yep, I'm happy to | Add support for DocAI from Google Cloud as a pdf parser | https://api.github.com/repos/langchain-ai/langchain/issues/9578/comments | 2 | 2023-08-22T05:52:46Z | 2023-11-28T16:07:55Z | https://github.com/langchain-ai/langchain/issues/9578 | 1,860,625,662 | 9,578 |
[
"hwchase17",
"langchain"
]
| ### System Info
HuggingFaceEndpoint returns an empty string both when prompted directly via the `._call()` method and when used as the LLM in a QA chain. Examples of what I've tried are below:
```
import os

from langchain.llms import HuggingFaceEndpoint
from langchain.chains.question_answering import load_qa_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.embeddings import HuggingFaceEmbeddings
### Example 1 -- returns empty string
endpoint_url = (
"my-real-endpoint-here"
)
llm = HuggingFaceEndpoint(
endpoint_url = endpoint_url,
huggingfacehub_api_token = os.environ['HUGGING_FACE_HUB_TOKEN'],
task = 'text2text-generation',
model_kwargs = {'temperature': 1e-20, "max_length": 900},
)
llm._call(prompt = "What is 4 + 4? Think the question through, step by step.")
>>> ""
### Example 2 -- used in QA chain and also returns empty string
### (exact same 'llm' object)
### text splitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size = 1000, chunk_overlap = 15)
### embeddings
embeddings = HuggingFaceEmbeddings()
chain = load_qa_chain(llm, chain_type = "stuff", verbose = False)
def extract_from_case(file_str: str, prompt_question: str):
docs = text_splitter.split_text(file_str)
db = FAISS.from_texts(docs, embeddings)
docs = db.similarity_search(prompt_question)
extraction = chain.run(input_documents = docs, question = prompt_question)
return extraction
sample_file = """Metadata: Date: 2017-01-18 File number: CEL-62600-16 CEL-62600-16 Citation: CEL-62600-16 (Re), 2017, retrieved on 2023-05-16. Content: Arrears Worksheet File Number: CEL-62600-16 Time period for Arrears Owing From: September 1, 2016 to"""
prompt = "What is the file number of this case?"
extract_from_case(file_str = sample_file, prompt_question = prompt)
>>> ""
```
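A sanity check that might help with triage (a sketch — it assumes the standard Hugging Face Inference Endpoint JSON payload; the parameters are only illustrative) is to hit the endpoint directly and inspect the raw response:
```python
import requests

headers = {"Authorization": f"Bearer {os.environ['HUGGING_FACE_HUB_TOKEN']}"}
response = requests.post(
    endpoint_url,
    headers=headers,
    json={"inputs": "What is 4 + 4?", "parameters": {"max_new_tokens": 100}},
)
print(response.status_code, response.json())
```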
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
See above.
### Expected behavior
To return a non-empty response. | HuggingFaceEndpoint Returns an Empty String | https://api.github.com/repos/langchain-ai/langchain/issues/9576/comments | 2 | 2023-08-22T03:52:51Z | 2023-11-28T16:08:00Z | https://github.com/langchain-ai/langchain/issues/9576 | 1,860,497,298 | 9,576 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I'm getting this:
Page Not Found
We could not find what you were looking for.
Please contact the owner of the site that linked you to the original URL and let them know their link is broken.
When visiting:
- https://python.langchain.com/docs/modules/chains/additional/
- https://python.langchain.com/docs/modules/chains/popular/
### Idea or request for content:
_No response_ | Additional/Popular chains Page Not Found | https://api.github.com/repos/langchain-ai/langchain/issues/9575/comments | 2 | 2023-08-22T03:24:06Z | 2023-11-28T16:08:05Z | https://github.com/langchain-ai/langchain/issues/9575 | 1,860,477,232 | 9,575 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It might be very helpful to let users react to each page of the documentation, so that contributors, users, and people working on the docs know which pages have become outdated or unusable.
### Motivation
I saw many inconsistencies across the docs, and some code samples are no longer usable as the dependencies update. It would be very helpful to have an indicator of whether a given page is still something we can rely on.
### Your contribution
I could contribute on the front-end but not sure how to configure it on langchain's database. | Adding thumbs-up and thumbs-down for documentations | https://api.github.com/repos/langchain-ai/langchain/issues/9559/comments | 1 | 2023-08-21T19:23:20Z | 2023-11-27T16:06:16Z | https://github.com/langchain-ai/langchain/issues/9559 | 1,859,998,066 | 9,559 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am trying to use a template to pre-customize the AI, but my code is not working!
Any idea why it's not working?
This is the basic part of my code:
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
template = """You are a customer service representative working for Amazon. You are having conversations with customers.
When asked about your profession, you should respond that you are a customer service representative for Amazon.
{memory}
#Human: {human_input}
#Chatbot:"""
prompt = PromptTemplate(input_variables=["memory", "human_input"],template=template)
memory = ConversationBufferMemory(memory_key="memory",prompt=prompt, return_messages=True)
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613",verbose=True)
def get_agent(self):
agent_kwargs = {
"extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
}
agent = initialize_agent(
tools=self.tools,
llm=llm,
agent=AgentType.OPENAI_FUNCTIONS,
verbose=True,
agent_kwargs=agent_kwargs,
memory=memory,
prompt=prompt,
)
return agent
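For reference, a sketch of the wiring I would have expected to need (hedged — this assumes the OPENAI_FUNCTIONS agent accepts a `system_message` through `agent_kwargs`; as far as I can tell the custom `PromptTemplate` above is not actually used to build the agent's prompt):
```python
from langchain.prompts import MessagesPlaceholder
from langchain.schema import SystemMessage

# inside get_agent(self)
agent_kwargs = {
    "system_message": SystemMessage(
        content=(
            "You are a customer service representative working for Amazon. "
            "When asked about your profession, respond that you are a customer "
            "service representative for Amazon."
        )
    ),
    "extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
}

agent = initialize_agent(
    tools=self.tools,
    llm=llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    agent_kwargs=agent_kwargs,
    memory=memory,
    verbose=True,
)
```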
### Expected behavior
When I ask the AI what its profession is, it replies
"I am an AI assistant designed to provide helpful information and assist with various tasks" instead of
"you are a customer service representative for Amazon" | using PromptTemplate with initialze_agent | https://api.github.com/repos/langchain-ai/langchain/issues/9553/comments | 8 | 2023-08-21T17:47:57Z | 2023-11-27T16:06:21Z | https://github.com/langchain-ai/langchain/issues/9553 | 1,859,861,388 | 9,553 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I appreciate your efforts to show how to integrate Hugging Face models with langchain: https://python.langchain.com/docs/integrations/llms/huggingface_pipelines
but this is just a shallow port from Hugging Face to langchain, not yet a full integration with the [Chat model](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/chat_models) class, which requires further inheritance and special-case development.
Can you guys spare some efforts to integrate the self-hosted version of llama 2 into the chat model?
### Motivation
Fully integrating self-hosted llama 2 with langchain would avoid privacy issues and let us control the whole LLM app development cycle.
### Your contribution
I'm willing to help to submit PR with proper guidance. | llama 2 through HF further integration with langchain on chat model | https://api.github.com/repos/langchain-ai/langchain/issues/9550/comments | 2 | 2023-08-21T17:37:45Z | 2023-11-27T16:06:26Z | https://github.com/langchain-ai/langchain/issues/9550 | 1,859,842,021 | 9,550 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The RetryOutputParser class has several drawbacks:
- it must point to a chain different from the one that calls it
- it does not make it possible to specify a number of retries
### Motivation
Following a request, ChatGPT sometimes answers with an invalid format and the same request must be resent to get the expected output.
It would be useful to have a new parameter added to a LLMChain (endowed with an output parser) to be able to retry the chain till the output is validated by the parser (i.e. it does not trigger any exception), with the possibility to specify a maximum of retries. It could take this form (just a proposal):
```
chain = LLMChain(llm=..., prompt=..., output_parser=..., retry=RetryChain(max_tries=10))
```
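In the meantime, a plain retry loop is the obvious workaround (a sketch — `run_with_retries` is just a name I made up):
```python
from langchain.schema import OutputParserException

def run_with_retries(chain, inputs: dict, max_tries: int = 10):
    last_error = None
    for _ in range(max_tries):
        try:
            # predict_and_parse runs the LLM and applies the chain's output parser
            return chain.predict_and_parse(**inputs)
        except OutputParserException as exc:
            last_error = exc
    raise last_error
```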
### Your contribution
Well, I don't think this feature would be complex to develop for the developers that already know the code. Let me know if this request makes sense. | Being able to retry a chain until the output is valid | https://api.github.com/repos/langchain-ai/langchain/issues/9546/comments | 10 | 2023-08-21T16:17:22Z | 2024-05-21T16:07:30Z | https://github.com/langchain-ai/langchain/issues/9546 | 1,859,715,484 | 9,546 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using the Cognitive search retriever according to the documentation, however I run into an error, regarding the type of return value.
**Reproduction**:
```python
import os
from langchain.retrievers import AzureCognitiveSearchRetriever

os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"] = ""
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"] = ""
os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"] = ""

retriever = AzureCognitiveSearchRetriever(content_key="text", top_k=5)
retriever.get_relevant_documents(query="What is langchain?")
```
**Expected behaviour**: return relevant documents/snippets
**Actual behaviour**:
```
ValidationError: 1 validation error for Document
page_content
  str type expected (type=type_error.str)
```
It seems like the langchain retriever expects a str somewhere, but Azure Cognitive Search returns something else. Can anyone help/explain? The error message does not say what type it actually received, nor where the string it expects should come from.
### Suggestion:
_No response_ | Azure cognitive search retriever: ValidationError: 1 validation error for Document page_content | https://api.github.com/repos/langchain-ai/langchain/issues/9545/comments | 2 | 2023-08-21T15:58:11Z | 2023-08-21T19:33:47Z | https://github.com/langchain-ai/langchain/issues/9545 | 1,859,684,763 | 9,545 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.268
Python 3.9
Windows 10
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from pydantic import ConfigDict

from langchain.schema import BaseOutputParser


class CustomType:
    pass


class CustomParser(BaseOutputParser):
    model_config = ConfigDict(arbitrary_types_allowed=True)
    c: CustomType
```
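For what it's worth, a sketch of the pydantic v1-style equivalent I would expect to work around this (since, as noted below, the base class is a pydantic v1 model):
```python
from langchain.schema import BaseOutputParser


class CustomType:
    pass


class CustomParser(BaseOutputParser):
    c: CustomType

    class Config:
        arbitrary_types_allowed = True

    def parse(self, text: str) -> CustomType:
        return CustomType()
```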
### Expected behavior
This triggers a RuntimeError since no validator could be found for CustomType object. It prevents the user from passing custom types to the constructor of a langchain's output parser.
Though, this problem has been corrected in the latest versions of pydantic (thanks to the flag arbitrary_types_allowed). After investigating the source code, one of the mother classes of BaseOutputParser is Serializable and the latter derives from... pydantic.v1.BaseModel instead of pydantic.BaseModel. Why is that? This seems to be the source of the problem. | Impossible to enrich BaseOutputParser with a custom object member | https://api.github.com/repos/langchain-ai/langchain/issues/9540/comments | 5 | 2023-08-21T14:43:51Z | 2024-06-01T00:07:33Z | https://github.com/langchain-ai/langchain/issues/9540 | 1,859,531,876 | 9,540 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am working with SQLDatabaseChain in the Django framework; every time I execute the code it shows the error below.
File "C:\Users\ehsuser\AppData\Local\Programs\Python\Python310\lib\socket.py", line 705, in readinto
return self._sock.recv_into(b)
ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
```
Below is my code
```python
user='root'
database= 'real_estate_chatbot_new'
password=''
host= 'localhost'
port= '3306'
table_name1='app_projects'
table_name2='app_projectimages'
db_uri=f"mysql+pymysql://{user}:{password}@{host}:{port}/{database}"
#input_db = SQLDatabase.from_uri('sqlite:///ashridhar.db')
input_db = SQLDatabase.from_uri(db_uri,include_tables=[table_name1,table_name2])
db_chain = SQLDatabaseChain.from_llm(llm, input_db, verbose=True)
prompt = """
- You're a real estate chatbot for Buy home that is going to answer to a potential lead so keep your messages dynamic and enthusiastic making the interactions lively and enjoyable!.
- Answer the question from database only if you don't find answer from database return a friendly message.
- don't use word like 'database' in answer.
- If it is not relevant to the database return connvencing and gratitude message.
- If it is realted to price return it in words suppose the price is 7000000 then return 70 lakhs."""
tools = [
Tool(
name="Real_Estate_Chatbot",
func=db_chain.run,
#description="Answer the question from database only if you don't find answer from database return a friendly message., You're a real estate sales agent that is going to answer to a potential lead so keep your messages dynamic and enthusiastic making the interactions lively and enjoyable!.",
description=prompt
),
]
agent_kwargs = {
"extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
}
memory = ConversationBufferWindowMemory(memory_key="memory",k=4,return_messages=True)
agent = initialize_agent(
tools,
llm,
agent=AgentType.OPENAI_FUNCTIONS,
verbose=True,
agent_kwargs=agent_kwargs,
memory=memory,
)
response = agent.run(message)
return response
```
### Suggestion:
_No response_ | Issue: I am using GPT-4 with SQLDatabaseChain it shows ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine error | https://api.github.com/repos/langchain-ai/langchain/issues/9538/comments | 2 | 2023-08-21T14:18:28Z | 2023-11-27T16:06:31Z | https://github.com/langchain-ai/langchain/issues/9538 | 1,859,484,256 | 9,538 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
pip freeze | grep langchain
langchain==0.0.268
langchainplus-sdk==0.0.17
```
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
from typing import Any, Dict, List
import langchain
from langchain.cache import SQLiteCache
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatVertexAI
from langchain.llms import VertexAI
from langchain.schema import HumanMessage
langchain.llm_cache = SQLiteCache(database_path="langchain-llm-cache.db")
class CallbackHandler(BaseCallbackHandler):
run_inline = True
def __init__(self, logger=None):
self.messages = []
def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str], run_id, parent_run_id, **kwargs: Any) -> Any:
self.messages.append(run_id)
callback = CallbackHandler()
bison_text_llm = VertexAI(model_name="text-bison@001", temperature=0.0, max_output_tokens=500, callbacks=[callback])
bison_chat_llm = ChatVertexAI(model_name="chat-bison@001", temperature=0.0, max_output_tokens=500, callbacks=[callback])
bison_chat_llm([HumanMessage(content="Hello, how are you?")])
assert len(callback.messages) == 1
bison_chat_llm([HumanMessage(content="Hello, how are you?")])
assert len(callback.messages) == 2
bison_text_llm("Hello, how are you?")
assert len(callback.messages) == 3
bison_text_llm("Hello, how are you?")
assert len(callback.messages) == 4
```
### Expected behavior
It's unclear to me whether callbacks should be called when call is cached, but we can see that chat and plain text models implement different behaviour.
Ideally, callbacks would be called, with a flag saying that call is cached. | Plain models are not calling callbacks when cached | https://api.github.com/repos/langchain-ai/langchain/issues/9537/comments | 7 | 2023-08-21T14:12:46Z | 2024-04-09T16:12:35Z | https://github.com/langchain-ai/langchain/issues/9537 | 1,859,472,931 | 9,537 |
[
"hwchase17",
"langchain"
]
| ### System Info
This regression affects Langchain >=0.0.262. This regression was introduced with #8965.
If an agent's output that feeds a tool (e.g. when generating an `AgentAction`) contains either backticks (such as a code block fenced with ```), or embedded JSON (such as a structured JSON string in the `action_input` key), then output parsing will fail.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behavior:
1. Generate an agent that outputs a markdown formatted code block with a language.
To input to this tool, an `AgentAction` step will be made, that might look like:
```
```json
{
"action": "Code generator tool",
"action_input": "Generate code to optimize ```python\nprint("hello world")```
}
```
(Can't show closing backticks above due to formatting issues, but assume there's closing backticks in the above code block)
2. An error will occur, as a result of being unable to parse the actions.
Using `pattern = re.compile(r"```(?:json)?\n(.*)```", re.DOTALL)` works slightly better for both embedded JSON and backticks, but will result in unexpected behavior if there are multiple actions in a response.
### Expected behavior
Should be able to parse a response with backticks or JSON inside the `action_input` key | Regression in structured_chat agent's Output parser | https://api.github.com/repos/langchain-ai/langchain/issues/9535/comments | 2 | 2023-08-21T14:05:33Z | 2023-11-27T16:06:36Z | https://github.com/langchain-ai/langchain/issues/9535 | 1,859,456,565 | 9,535 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version = 0.0.268
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have a list of objects in roadmap.json, and I tried to achieve what is shown in the SelfQueryRetriever documentation.
```
with open('roadmap.json') as json_file:
allRoadmap = json.load(json_file)
docs = []
for roadmap in allRoadmap:
print(roadmap["name"])
for section in roadmap["sections"]:
single_doc = Document(
page_content=f"This section is related to {roadmap['name']}",
metadata={"roadmapName": roadmap["name"], "sectionTopic": section["name"]}
)
docs.append(single_doc)
print(section["name"])
print("\n")
vectorstore = Chroma.from_documents(docs, embeddings)
print(vectorstore)
for doc in docs:
print(doc)
```
This above code outputs
```
page_content='This section is related to HTML for beginners' metadata={'roadmapName': 'HTML for beginners', 'sectionTopic': 'HTML Basics'}
page_content='This section is related to HTML for beginners' metadata={'roadmapName': 'HTML for beginners', 'sectionTopic': 'HTML Tags and Elements'}
page_content='This section is related to HTML for beginners' metadata={'roadmapName': 'HTML for beginners', 'sectionTopic': 'Intermediate Concepts'}
page_content='This section is related to HTML for beginners' metadata={'roadmapName': 'HTML for beginners', 'sectionTopic': 'Advanced Concepts'}
page_content='This section is related to How Search Engine Works' metadata={'roadmapName': 'How Search Engine Works', 'sectionTopic': 'Internet'}
page_content='This section is related to How Search Engine Works' metadata={'roadmapName': 'How Search Engine Works', 'sectionTopic': 'Search Engine'}
page_content='This section is related to Github Roadmap' metadata={'roadmapName': 'Github Roadmap', 'sectionTopic': 'Section 1'}
page_content='This section is related to Bootstrap' metadata={'roadmapName': 'Bootstrap', 'sectionTopic': 'Introduction'}
page_content='This section is related to Bootstrap' metadata={'roadmapName': 'Bootstrap', 'sectionTopic': 'Concepts'}
page_content='This section is related to Bootstrap' metadata={'roadmapName': 'Bootstrap', 'sectionTopic': 'Hands On'}
page_content='This section is related to MongoDB' metadata={'roadmapName': 'MongoDB', 'sectionTopic': 'Introduction'}
page_content='This section is related to MongoDB' metadata={'roadmapName': 'MongoDB', 'sectionTopic': 'Queries'}
page_content='This section is related to MongoDB' metadata={'roadmapName': 'MongoDB', 'sectionTopic': 'Data Modeling'}
page_content='This section is related to MongoDB' metadata={'roadmapName': 'MongoDB', 'sectionTopic': 'Aggregation'}
page_content='This section is related to Python for Beginers' metadata={'roadmapName': 'Python for Beginers', 'sectionTopic': 'Getting Started'}
page_content='This section is related to Python for Beginers' metadata={'roadmapName': 'Python for Beginers', 'sectionTopic': 'Basics'}
page_content='This section is related to Python for Beginers' metadata={'roadmapName': 'Python for Beginers', 'sectionTopic': 'Conditions and Loops'}
page_content='This section is related to Python for Beginers' metadata={'roadmapName': 'Python for Beginers', 'sectionTopic': 'Arrays'}
page_content='This section is related to Python for Beginers' metadata={'roadmapName': 'Python for Beginers', 'sectionTopic': 'Functions'}
page_content='This section is related to Python for Beginers' metadata={'roadmapName': 'Python for Beginers', 'sectionTopic': 'Solving Problems'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Basics'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Color and Background'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Typography and Fonts'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Spacing in CSS'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Basic Styling'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Positioning Techniques'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Advanced Styling'}
page_content='This section is related to Google Colab' metadata={'roadmapName': 'Google Colab', 'sectionTopic': 'Section 1'}
page_content='This section is related to MySQL' metadata={'roadmapName': 'MySQL', 'sectionTopic': 'Section 1'}
page_content='This section is related to Docker' metadata={'roadmapName': 'Docker', 'sectionTopic': 'Section 1'}
page_content='This section is related to AWS Lambda' metadata={'roadmapName': 'AWS Lambda', 'sectionTopic': 'Section 1'}
page_content='This section is related to Java' metadata={'roadmapName': 'Java', 'sectionTopic': 'Section 1'}
```
I tried to retrieve data from using selfQuery
```
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info = [
AttributeInfo(
name="sectionTopic",
description="The section topic of the roadmap",
type="string",
),
AttributeInfo(
name="roadmapName",
description="Name of the roadmap",
type="string",
),
]
document_content_description = "Roadmap section topics"
llm = OpenAI(temperature=0.1)
retriever = SelfQueryRetriever.from_llm(
llm, vectorstore, document_content_description, metadata_field_info, verbose=True,
search_kwargs={"k": 7}
)
retriever.get_relevant_documents("What are unique section topics that are related to css")
```
For the above code it returned repeated documents.
```
query='css' filter=None limit=None
[Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Advanced Styling'}),
Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Typography and Fonts'}),
Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Basic Styling'}),
Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Color and Background'}),
Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Spacing in CSS'}),
Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Basic Styling'}),
Document(page_content='This section is related to CSS for beginners', metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Advanced Styling'})]
```
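(A quick check — just a sketch that peeks at a private attribute — to see whether the Chroma collection has accumulated duplicate entries from repeated indexing runs, which would explain the repeats:)
```python
# number of vectors actually stored vs. number of documents indexed in this run
print(vectorstore._collection.count(), len(docs))
```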
### Expected behavior
It should return 7 unique documents
```
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Basics'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Color and Background'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Typography and Fonts'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Spacing in CSS'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Basic Styling'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Positioning Techniques'}
page_content='This section is related to CSS for beginners' metadata={'roadmapName': 'CSS for beginners', 'sectionTopic': 'Advanced Styling'}
``` | SelfQueryRetriever returns duplicate document data using with chromaDB | https://api.github.com/repos/langchain-ai/langchain/issues/9532/comments | 6 | 2023-08-21T12:15:55Z | 2024-03-26T13:13:52Z | https://github.com/langchain-ai/langchain/issues/9532 | 1,859,248,153 | 9,532 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.74
Python 3.10
Windows 10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import UnstructuredExcelLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

loader = UnstructuredExcelLoader("N:\Python\Data.xlsx", mode="elements")
index = VectorstoreIndexCreator().from_loaders([loader])
```
Gives the following:
```
Traceback (most recent call last):
  Cell In[33], line 1
    index = VectorstoreIndexCreator().from_loaders([loader])
  File C:\Program Files\Anaconda3\lib\site-packages\langchain\indexes\vectorstore.py:73 in from_loaders
  File C:\Program Files\Anaconda3\lib\site-packages\langchain\indexes\vectorstore.py:77 in from_documents
AttributeError: 'RecursiveCharacterTextSplitter' object has no attribute 'split_documents'
```
### Expected behavior
Hi,
When using the VectorstoreIndexCreator, I get an error:
AttributeError: 'RecursiveCharacterTextSplitter' object has no attribute 'split_documents'
In this case it happens when I upload an Excel file, but I get the same error when I try to upload .txt files (with TextLoader).
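In case it helps with triage, here is a quick check (just a sketch) to confirm which langchain installation is being imported and whether its splitter actually exposes the method:
```python
import langchain
from langchain.text_splitter import RecursiveCharacterTextSplitter

print(langchain.__version__, langchain.__file__)
print(hasattr(RecursiveCharacterTextSplitter(), "split_documents"))
```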
Many thanks for your help! | AttributeError: 'RecursiveCharacterTextSplitter' object has no attribute 'split_documents' | https://api.github.com/repos/langchain-ai/langchain/issues/9528/comments | 2 | 2023-08-21T08:54:33Z | 2023-11-27T16:06:46Z | https://github.com/langchain-ai/langchain/issues/9528 | 1,858,916,366 | 9,528 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi,
I am using langchain version 0.0.187.
Here are the steps that I have followed to get a chat response using langchain:
```
def initlize_llm_chain_prompt(self):
print("initilizing llm chain prompt")
llm_model=AzureOpenAI(deployment_name=self.AZURE_OPENAI_CHATGPT_DEPLOYMENT,model_name=os.environ.get("AZURE_OPENAI_CHATGPT_MODEL"),
temperature=0,
max_tokens=1000)
llm_chat_prompt = PromptTemplate(input_variables=["question"], template=self.prompt_template)
question_generator = LLMChain(llm=llm_model, prompt=llm_chat_prompt)
doc_chain=load_qa_chain(llm_model, chain_type="stuff")
memory = ConversationBufferWindowMemory(memory_key="chat_history",return_messages=True,k=4) # remember last 4 conversations
self.llm_chain_prompt=ConversationalRetrievalChain(retriever=self.elastic_vector_search.as_retriever(search_kwargs={"k": 5}),
memory=memory,question_generator=question_generator,combine_docs_chain=doc_chain,verbose=True,get_chat_history=self.get_chat_history_custom)
```
**_Here is how I am calling the chat_**
`result=self.llm_chain({"question": user_query}, )`
Here is the Prompt template format
```
prompt_template = """<|im_start|>System
Answer ONLY with the facts listed in the Referenced documents.
If there isn't enough information in the Sources , say you don't know.
Do not strictly generate answers if its not available in the Source and even though if you know the answer for it.
<|im_end|>
<|im_start|>
UserQuery:
{question}
<|im_end|>
<|im_start|>Chatbot:
"""
```
In get_chat_history_custom() I am just appending the conversation history data:
```
def get_chat_history_custom(self,inputs):
res = []
if(len(inputs)>0):
inp_len=len(inputs)
for i in range(0,inp_len,2):
human_content = inputs[i].content
ai_content=inputs[i+1].content.split('Question')[0]
res.append(f"Human:{human_content}\AIMessage:{ai_content}")
buf="\n".join(res)
return buf
else:
return ""
```
What I have observed is that I am not getting good responses even though I am providing the relevant data as the source for the chat. I get good, expected answers if I use an OpenAI completion call directly. Could you please let me know if any of the above steps is wrong, and whether you recommend a different langchain version?
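One thing that may explain the degraded answers (a sketch, not a verified fix): the custom prompt above is wired into the question-generator (condense-question) step, while the combine-docs chain still uses the default QA prompt. A prompt that actually includes the retrieved `{context}` could instead be passed to `load_qa_chain`, for example:
```python
from langchain.prompts import PromptTemplate
from langchain.chains.question_answering import load_qa_chain

qa_template = """<|im_start|>System
Answer ONLY with the facts listed in the referenced documents.
If there isn't enough information in the sources, say you don't know.
<|im_end|>
<|im_start|>
Sources:
{context}

UserQuery:
{question}
<|im_end|>
<|im_start|>Chatbot:
"""
qa_prompt = PromptTemplate(input_variables=["context", "question"], template=qa_template)
doc_chain = load_qa_chain(llm_model, chain_type="stuff", prompt=qa_prompt)
```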
### Suggestion:
Chatbot responses are not getting as expected in langchain on compared to the responses of open AI completion | Chatbot responses are deterorating on using langchain compared to open AI completion | https://api.github.com/repos/langchain-ai/langchain/issues/9526/comments | 2 | 2023-08-21T08:00:24Z | 2023-11-30T16:07:01Z | https://github.com/langchain-ai/langchain/issues/9526 | 1,858,823,860 | 9,526 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Issue you'd like to raise.
langchain==0.0.162
Hi, I am using a fine tuned model that generates user prompts to SQL queries, instead of the default model provided by langchain. The reason for doing this is because langchain does not know about all the data from the database unless you provide context to it but there is a lot of data hence why it can create incorrect SQL queries also it is unable to form complex queries for similar reasons even after you give it context.
So my question is based upon the output I am getting below. Is there a way to keep the initial question I asked the same throughout like in the action input, instead of "SOME_BRANCH_NAME" I want the entire sentence to go through to the SQLDatabaseChain like the user initially asked which is "what is the summary of last 3 issues reported by SOME_BRANCH_NAME". Basically since the Action Input is different from what the user asked, it is generating the wrong SQL query since what it should be doing is this, "SELECT summary FROM sla_tat_summary WHERE organization like '%SOME_BRANCH_NAME%' ORDER BY ReportedDate DESC LIMIT 3;" instead of what is shown below. I could just use the SQLDatabaseChain on its own which does get the exact query I want since I was able to make sure only the prompt the user gave went through, but the agent is needed since I am using it for things other than SQL generation.
user prompt: what is the summary of last 3 issues reported by SOME_BRANCH_NAME
Entering new AgentExecutor chain...
I need to find out what the last 3 issues reported by SOME_BRANCH_NAME were.
Action: TPS Issue Tracker Database
Action Input: SOME_BRANCH_NAME
Entering new SQLDatabaseChain chain...
SOME_BRANCH_NAME:
SELECT organization, COUNT() FROM sla_tat_summary WHERE severity = 'Level 2 - Critical' GROUP BY organization ORDER BY COUNT() DESC LIMIT 1
In summary, I want an option to keep my user prompt unchanged throughout the flow from the agent to the SQLDatabaseChain.
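As a stopgap, here is a sketch of a possible workaround (hypothetical wiring, reusing the `Tool` and `db_chain` objects from my setup — it simply ignores the agent's rephrased Action Input and forwards the original user prompt to the SQL chain):
```python
def make_sql_tool(original_question: str) -> Tool:
    # Whatever the agent puts in Action Input is discarded; the untouched
    # user question is what actually reaches the SQLDatabaseChain.
    return Tool(
        name="TPS Issue Tracker Database",
        func=lambda _action_input: db_chain.run(original_question),
        description="Answers questions about the TPS issue tracker database.",
    )
```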
### Motivation
I need a way for langchain to be able to use fine-tuned models for multi-class classification and have a way to not use hard-coded stopping points rather have it as a parameter for a developer to select.
### Your contribution
my custom langchain package would only integrate well with my own use case hence why I am not submitting a PR. Also even though I am using langchain==0.0.162. But the issue would be similar in the latest langchain version | Request for using custom fine tuned models | https://api.github.com/repos/langchain-ai/langchain/issues/9523/comments | 2 | 2023-08-21T05:59:17Z | 2023-11-27T16:06:56Z | https://github.com/langchain-ai/langchain/issues/9523 | 1,858,654,621 | 9,523 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
This is my code, I'm not sure if it's correct:
```python
from langchain.llms import ChatGLM
from langchain.agents import load_tools, initialize_agent
from langchain.agents import AgentType, tool
from datetime import date


@tool
def time(text: str) -> str:
    """返回今天的日期,用于回答用户关于今天日期的问题。
    输入应该始终为空字符串,此函数将始终返回今天的日期。
    任何日期计算都应在此函数外部进行。"""
    # (Docstring translation: returns today's date, for answering questions about
    #  today's date. The input should always be an empty string; any date
    #  arithmetic must happen outside this function.)
    return str(date.today())


endpoint_url = "http://0.0.0.0:8000"
llm = ChatGLM(
    endpoint_url=endpoint_url,
    max_token=80000,
    # history=[["我有一些问题!", "我可以回答你的任何问题,请向我提问!"]],
    top_p=0.9,
    model_kwargs={"sample_model_args": False},
)

tools = load_tools([], llm=llm)
agent = initialize_agent(
    tools + [time],
    llm,
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True,
)

try:
    result = agent.run("今天是几号?")  # "What is today's date?"
    print(result)
except Exception as e:
    print("外部访问异常:", e)  # "external access exception"
```
However, the answer after running is not accurate — the model seems to answer from its own knowledge ("今天是2023年2月18日。", i.e. "Today is February 18, 2023") instead of using the `time` tool:
```
(chatglm_env) root@autodl-container-b25f1193e8-0ae83321:~/autodl-tmp# python t_n.py
> Entering new AgentExecutor chain...
Could not parse LLM output: 今天是2023年2月18日。
Observation: Invalid or incomplete response
Thought:Could not parse LLM output: Today is February 18th, 2023.
Observation: Invalid or incomplete response
Thought:Final Answer: 今天是2023年2月18日。
> Finished chain.
今天是2023年2月18日。
```
Please help me
### Suggestion:
_No response_ | Issue: Proxy not successfully used | https://api.github.com/repos/langchain-ai/langchain/issues/9522/comments | 6 | 2023-08-21T03:49:38Z | 2023-11-28T16:08:20Z | https://github.com/langchain-ai/langchain/issues/9522 | 1,858,529,130 | 9,522 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version 0.0.268
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Load in .yaml file containing the OpenAPI specification.
2. Call the OpenAPISpec with ` open_api_spec = OpenAPISpec.from_file(file_path)`
The error shows that in _openapi.py_, line 202, `return super().parse_obj(obj)` no longer finds `parse_obj` on the parent class (Pydantic's `BaseModel.parse_obj`).
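A minimal, self-contained sketch of the failing call (the import path here is my assumption and may differ between versions):
```python
from langchain.utilities.openapi import OpenAPISpec

# raises: AttributeError: 'super' object has no attribute 'parse_obj'
open_api_spec = OpenAPISpec.from_file("openapi_spec.yaml")
```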
### Expected behavior
The OpenAPISpec should be delivered so that it can be passed on to `NLAToolkit.from_llm_and_spec`. | AttributeError: 'super' object has no attribute 'parse_obj' when using OpenAPISpec.from_file | https://api.github.com/repos/langchain-ai/langchain/issues/9520/comments | 15 | 2023-08-21T02:28:34Z | 2024-06-26T11:19:14Z | https://github.com/langchain-ai/langchain/issues/9520 | 1,858,436,969 | 9,520 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.268
python=3.9
mac
### Who can help?
I used FAISS as the vector store. It seems that `similarity_search_with_score` (supposedly ranked by distance: low to high) and `similarity_search_with_relevance_scores` (supposedly ranked by relevance: high to low) produce conflicting results when specifying `MAX_INNER_PRODUCT` as the distance strategy. Please see the screenshot below:
<img width="675" alt="Screenshot 2023-08-20 at 6 54 15 PM" src="https://github.com/langchain-ai/langchain/assets/7220686/88981277-a0b4-462b-929c-63bd19d4faff">
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.vectorstores.faiss import FAISS
from langchain.embeddings import HuggingFaceEmbeddings

embedding_engine = HuggingFaceEmbeddings(
    model_name="BAAI/bge-base-en",  # "sentence-transformers/all-mpnet-base-v2",
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": True},  # set True to compute cosine similarity
)

texts = ["I like apples", "I hate apples", "I like oranges"]
simple_vecdb = FAISS.from_texts(
    texts,
    embedding_engine,
    distance_strategy="MAX_INNER_PRODUCT",
)

# test 1
simple_vecdb.similarity_search_with_score("I like apples")

# test 2
simple_vecdb.similarity_search_with_relevance_scores("I like apples")
```
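As a side note, a quick sanity check (a numpy sketch, independent of langchain) shows that with `normalize_embeddings=True` the inner product of a text's embedding with itself is ~1.0, which is what I would expect the raw MAX_INNER_PRODUCT score for the identical text to be:
```python
import numpy as np

v = np.array(embedding_engine.embed_query("I like apples"))
print(float(np.dot(v, v)))  # ~1.0 because the embeddings are L2-normalized
```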
### Expected behavior
For `similarity_search_with_score`: if the documentation is correct in saying "List of documents most similar to the query text with L2 distance in float. Lower score represents more similarity.", then shouldn't `similarity_search_with_score` have matched the identical text "I like apples" with a (distance) score of 0, and `similarity_search_with_relevance_scores` with a (relevance) score of 1?
if the FAISS return the cosine similarity score already during the `similarity_search_with_score`call, should the `_max_inner_product_relevance_score_fn` for the FAISS just return the identical score instead of `1-score` when calculating the relevance score? | FAISS vectorstore `similarity_search_with_relevance_scores` returns strange/false result with `MAX_INNER_PRODUCT` | https://api.github.com/repos/langchain-ai/langchain/issues/9519/comments | 3 | 2023-08-21T02:21:28Z | 2023-12-25T16:09:00Z | https://github.com/langchain-ai/langchain/issues/9519 | 1,858,432,029 | 9,519 |
[
"hwchase17",
"langchain"
]
| https://github.com/langchain-ai/langchain/blame/e51bccdb2890fa193ce7eb5bf7e13c28afef4dc4/libs/langchain/langchain/vectorstores/pgvector.py#L117
@hwchase17 | pgvector extension is not installed | https://api.github.com/repos/langchain-ai/langchain/issues/9511/comments | 2 | 2023-08-20T12:52:42Z | 2023-12-06T17:44:06Z | https://github.com/langchain-ai/langchain/issues/9511 | 1,858,114,722 | 9,511 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.8.16
langchain==0.0.268
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`ChatAnthropic` doesn't have `model_name` attribute. instead, it has a `model` attribute that specifies the name. Other chat models such as `ChatOpenAI`, `ChatVertexAI` has `model_name` attribute, this breaks the interface while integrating with multiple LLMs.
### Expected behavior
`llm = ChatAnthropic(model_name='claude-2')`
`print(llm.model_name)` | missing model_name param in ChatAnthropic | https://api.github.com/repos/langchain-ai/langchain/issues/9510/comments | 4 | 2023-08-20T12:37:45Z | 2023-12-07T16:06:40Z | https://github.com/langchain-ai/langchain/issues/9510 | 1,858,110,406 | 9,510 |
[
"hwchase17",
"langchain"
]
| ### System Info
`ValueError Traceback (most recent call last)
Cell In[35], line 3
1 query = "What did the president say about the Supreme Court"
2 docs = db.similarity_search(query)
----> 3 chain.run(input_documents=docs, question=query)
File [~/.local/lib/python3.10/site-packages/langchain/chains/base.py:480](https://file+.vscode-resource.vscode-cdn.net/home/moghadas/toy_projects/~/.local/lib/python3.10/site-packages/langchain/chains/base.py:480), in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
475 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
476 _output_key
477 ]
479 if kwargs and not args:
--> 480 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
481 _output_key
482 ]
484 if not kwargs and not args:
485 raise ValueError(
486 "`run` supported with either positional arguments or keyword arguments,"
487 " but none were provided."
488 )
File [~/.local/lib/python3.10/site-packages/langchain/chains/base.py:282](https://file+.vscode-resource.vscode-cdn.net/home/moghadas/toy_projects/~/.local/lib/python3.10/site-packages/langchain/chains/base.py:282), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
280 except (KeyboardInterrupt, Exception) as e:
281 run_manager.on_chain_error(e)
--> 282 raise e
...
113 if self.client.task == "text-generation":
114 # Text generation return includes the starter text.
115 text = response[0]["generated_text"][len(prompt) :]
ValueError: Error raised by inference API: Model google/flan-t5-xl time out
```
This exception should be handled
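For example, a retry wrapper would at least paper over transient timeouts (a sketch using `tenacity`, not part of langchain; the attempt/wait numbers are arbitrary):
```python
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(min=1, max=30))
def run_chain_with_retry(chain, docs, query):
    # re-runs the chain if the Hugging Face Inference API times out
    return chain.run(input_documents=docs, question=query)
```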
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`query = "What did the president say about the Supreme Court"
docs = db.similarity_search(query)
chain.run(input_documents=docs, question=query)`
### Expected behavior
Retry mechanism, or caching | TimeOutError unhandled | https://api.github.com/repos/langchain-ai/langchain/issues/9509/comments | 2 | 2023-08-20T11:50:21Z | 2023-11-26T16:06:29Z | https://github.com/langchain-ai/langchain/issues/9509 | 1,858,096,249 | 9,509 |
[
"hwchase17",
"langchain"
]
| how to improve the perfromance of agents to get better responses from the local model like gpt4all | how to improve the perfromance of agents to get better responses from the local model like gpt4all | https://api.github.com/repos/langchain-ai/langchain/issues/9506/comments | 6 | 2023-08-20T06:25:58Z | 2023-12-02T16:06:07Z | https://github.com/langchain-ai/langchain/issues/9506 | 1,857,998,389 | 9,506 |
[
"hwchase17",
"langchain"
]
| The `QAGenerationChain` as it is currently written is prone to a `JSONDecodeError`, as mentioned in https://github.com/langchain-ai/langchain/pull/9503. That was my naive attempt to fix the problem I was having, but as I explained in closing the PR, I think a `PydanticOutputParser` with a more instructive prompt or an auto-fixing parser would be more robust. Plus, I think the successful runs after implementing my fix were just luck. 🤣
In `QAGenerationChain._call`, after generation, `json.loads` frequently raises a `JSONDecodeError`. This is usually because the response is wrapped in Markdown code tags or prefaced with a message, as seen in the following traces:
- `ChatOpenAI` gpt-3.5-turbo: [successful trace](https://smith.langchain.com/public/6c42845c-bbe2-41c1-99a4-7669112f1504/r)
- `ChatOpenAI` gpt-4: [`JSONDecodeError` due to Markdown formatting](https://smith.langchain.com/public/d235ef1e-fb13-4177-adb3-6ddaf159bfea/r)
- `ChatAnthropic` Claude 2: [`JSONDecodeError` due to message before the JSON + Markdown formatting](https://smith.langchain.com/public/27eed20e-9ca5-4b4a-a3d3-71efde7aee3c/r)
- `ChatAnyscale` llama 2 70b: [successful trace](https://smith.langchain.com/public/e8f0b44c-424f-47ef-8cb3-31713b548d46/r?tab=0)
- `ChatAnyscale` llama 2 70b: [`JSONDecodeError` due to message before the JSON + Markdown formatting](https://smith.langchain.com/public/c89a7415-477d-4b82-a0a4-0a333b8b5680/r)
Each of these models had a mix of successful and failed runs. I probably should have lowered the temperature and tried non-chat models as well.

I would like to help however I could on this, I just want to be sure my fix aligns with the greater vision for this chain since it seems particularly useful. :)
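For reference, a rough sketch of the direction I have in mind (an untested outline, not a proposal for the final API): build a `PydanticOutputParser` for the question/answer pairs and wrap it in an `OutputFixingParser`, so malformed JSON gets one repair pass before failing:
```python
from typing import List

from pydantic import BaseModel

from langchain.output_parsers import OutputFixingParser, PydanticOutputParser


class QAPair(BaseModel):
    question: str
    answer: str


class QAPairs(BaseModel):
    pairs: List[QAPair]


base_parser = PydanticOutputParser(pydantic_object=QAPairs)
fixing_parser = OutputFixingParser.from_llm(parser=base_parser, llm=llm)

# inside QAGenerationChain._call, roughly where json.loads(...) is used today:
qa_pairs = fixing_parser.parse(res.generations[0][0].text)
```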
### System Info
Version info from LangSmith:
```
RUNTIME
langchain_version: "0.0.268"
library: "langchain"
library_version: "0.0.268"
platform: "Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.36"
runtime: "python"
runtime_version: "3.11.4"
sdk_version: "0.0.25"
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import QAGenerationChain
# tested with gpt-3.5-turbo, gpt-4, claude 2, and llama 2 70b
qagen = QAGenerationChain.from_llm(llm=llm)
question = qagen.run(docs[0].page_content)
# JSONDecodeError (sometimes)
```
The error is inconsistent, as seen in the screenshot above. This is for reasons we're all familiar with: perhaps the output is wrapped in Markdown code formatting with triple-backticks, maybe it's prefaced with a "sure, here's your JSON", etc.
Relevant examples from LangSmith:
- `ChatOpenAI` gpt-3.5-turbo: [successful trace](https://smith.langchain.com/public/6c42845c-bbe2-41c1-99a4-7669112f1504/r)
- `ChatOpenAI` gpt-4: [`JSONDecodeError` due to Markdown formatting](https://smith.langchain.com/public/d235ef1e-fb13-4177-adb3-6ddaf159bfea/r)
- `ChatAnthropic` Claude 2: [`JSONDecodeError` due to message before the JSON + Markdown formatting](https://smith.langchain.com/public/27eed20e-9ca5-4b4a-a3d3-71efde7aee3c/r)
- `ChatAnyscale` llama 2 70b: [successful trace](https://smith.langchain.com/public/e8f0b44c-424f-47ef-8cb3-31713b548d46/r?tab=0)
- `ChatAnyscale` llama 2 70b: [`JSONDecodeError` due to message before the JSON + Markdown formatting](https://smith.langchain.com/public/c89a7415-477d-4b82-a0a4-0a333b8b5680/r)
### Expected behavior
`QAGenerationChain.run` should return its expected output with no `JSONDecodeError`. I realize there's only so much we can do for the chain's default settings, and an LLM call can be inherently predictable, but some parsing and error handling would be nice. :) | QAGenerationChain.run often ends in JSONDecodeError | https://api.github.com/repos/langchain-ai/langchain/issues/9505/comments | 6 | 2023-08-20T06:01:56Z | 2024-02-08T03:16:35Z | https://github.com/langchain-ai/langchain/issues/9505 | 1,857,988,199 | 9,505 |
[
"hwchase17",
"langchain"
]
| ### System Info
! pip show langchain
Name: langchain
Version: 0.0.268
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
! pip show openlm
Name: openlm
Version: 0.0.5
Summary: Drop-in OpenAI-compatible that can call LLMs from other providers
Python 3.11.3
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
! pip install openlm
! pip install langchain

from getpass import getpass
import os
from langchain.llms import OpenLM
from langchain import PromptTemplate, LLMChain

os.environ['OPENAI_API_KEY'] = "<openai-api-key>"
os.environ['HF_API_TOKEN'] = "<hf-api-key>"

if "OPENAI_API_KEY" not in os.environ:
    print("Enter your OpenAI API key:")
    os.environ["OPENAI_API_KEY"] = getpass()

if "HF_API_TOKEN" not in os.environ:
    print("Enter your HuggingFace Hub API key:")
    os.environ["HF_API_TOKEN"] = getpass()

question = "What is the capital of France?"
template = """Question: {question} Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

for model in ["text-davinci-003", "huggingface.co/gpt2"]:
    llm = OpenLM(model=model)
    llm_chain = LLMChain(prompt=prompt, llm=llm)
    result = llm_chain.run(question)
    print("""Model: {} Result: {}""".format(model, result))
```
When I run the above code, I get the following error:
TypeError: Completion.create() got an unexpected keyword argument 'api_key'
Can anyone help me fix this issue?
### Expected behavior
As mentioned in the document below,
https://python.langchain.com/docs/integrations/llms/openlm
Model: text-davinci-003
Result: France is a country in Europe. The capital of France is Paris.
Model: huggingface.co/gpt2
Result: Question: What is the capital of France?
Answer: Let's think step by step. I am not going to lie, this is a complicated issue, and I don't see any solutions to all this, but it is still far more | LLM Chain: Unable to run OpenLM - unexpected keyword argument 'api_key' in Completion.create class | https://api.github.com/repos/langchain-ai/langchain/issues/9504/comments | 6 | 2023-08-20T05:26:12Z | 2023-11-27T16:07:11Z | https://github.com/langchain-ai/langchain/issues/9504 | 1,857,958,938 | 9,504 |
[
"hwchase17",
"langchain"
]
| > from langchain.llms import HuggingFacePipeline
`ImportError: cannot import name 'HuggingFacePipeline' from 'langchain.llms'` | Issue: Can't import HuggingFacePipeline | https://api.github.com/repos/langchain-ai/langchain/issues/9502/comments | 12 | 2023-08-20T02:53:24Z | 2024-06-13T20:44:44Z | https://github.com/langchain-ai/langchain/issues/9502 | 1,857,932,992 | 9,502 |
[
"hwchase17",
"langchain"
]
| ### System Info
requirements.txt:
```
langchain==0.0.254
atlassian-python-api==3.36.0
chromadb==0.3.25
huggingface-hub==0.16.4
torch==2.0.1
sentence-transformers==2.2.2
InstructorEmbedding==1.0.0
p4python==2023.1.2454917
lxml==4.9.2
bs4==0.0.1
```
Dockerfile
```Dockerfile
FROM python:3.10
# Create a directory for your application
WORKDIR /app
COPY requirements.txt .
# Upgrade pip
RUN wget https://bootstrap.pypa.io/get-pip.py
RUN python3.10 get-pip.py
RUN python3.10 -m pip install --upgrade pip
RUN python3.10 -m pip install -r requirements.txt
# RUN python3.10 preload.py
RUN python3.10 -m pip install openai tiktoken
RUN apt-get update && apt-get install -y vim
COPY . .
ENTRYPOINT sleep infinity;
```
Host where I am running Docker: (Am also running python3.10 on this)
```bash
Software:
System Software Overview:
System Version: macOS 13.4 (22F66)
Kernel Version: Darwin 22.5.0
Boot Volume: Macintosh HD
Boot Mode: Normal
Hardware:
Hardware Overview:
Model Name: MacBook Pro
Model Identifier: Mac14,10
Chip: Apple M2 Pro
Total Number of Cores: 12 (8 performance and 4 efficiency)
Memory: 32 GB
```
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run this script, with the above requirements.txt, in a Docker container built from the Dockerfile above:
```python
from langchain.embeddings import HuggingFaceEmbeddings, HuggingFaceInstructEmbeddings
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_loaders import ConfluenceLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
loader = ConfluenceLoader(...)
documents = loader.load(...)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = HuggingFaceInstructEmbeddings(model_name= "hkunlp/instructor-large", model_kwargs={"device": "cpu"})
#Vector Database Storage: Store the generated embeddings in a vector database, allowing for efficient similarity searches.
db = Chroma.from_documents(texts, embeddings) # Stuck here
```
Successful runs where I am able to upload embeddings to db:
- Running this on my mac (no Docker)
- Running this in Docker container with OpenAIEmbeddings
Runs that get stuck:
- Running this in Docker container with HuggingFaceEmbeddings or HuggingFaceInstructEmbeddings
When I manually interrupt that stuck process, here is the traceback:
```bash
Traceback (most recent call last):
File "/app/context.py", line 26, in <module>
ctx_manager.register_documents(paths, text_splitter_type, text_splitter_params,
File "/app/context_manager/context_manager/context_manager.py", line 197, in register_documents
self._vectordb_manager.add_documents(texts, embeddings,
File "/app/context_manager/db_managers/db_managers.py", line 92, in add_documents
vectorstore = Chroma.from_documents(
File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 603, in from_documents
return cls.from_texts(
File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 567, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/usr/local/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 187, in add_texts
embeddings = self._embedding_function.embed_documents(texts)
File "/usr/local/lib/python3.10/site-packages/langchain/embeddings/huggingface.py", line 77, in embed_documents
embeddings = self.client.encode(texts, **self.encode_kwargs)
File "/usr/local/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py", line 165, in encode
out_features = self.forward(features)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/container.py", line 217, in forward
input = module(input)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/sentence_transformers/models/Transformer.py", line 74, in forward
output_states = self.auto_model(**trans_features, return_dict=False)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1935, in forward
encoder_outputs = self.encoder(
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1094, in forward
layer_outputs = layer_module(
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 754, in forward
hidden_states = self.layer[-1](hidden_states)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 343, in forward
forwarded_states = self.DenseReluDense(forwarded_states)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 288, in forward
hidden_states = self.wi(hidden_states)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
KeyboardInterrupt
```
I thought it might be CPU usage, but on my Mac the CPU usage maxes out at around 270% and it does not get stuck. However, in Docker the CPU usage maxes out at 600% (6 CPUs reserved) and it still gets stuck. Memory is also below the limit in both environments.
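One thing I still plan to try (a guess on my part, not a confirmed fix) is pinning the thread counts inside the container, since oversubscribed OpenMP/MKL/torch threads are a common cause of `encode()` stalling under Docker CPU limits:

```python
import os

# Set these before torch / sentence-transformers are imported;
# "6" matches the CPUs reserved for the container and is just a guess.
os.environ["OMP_NUM_THREADS"] = "6"
os.environ["MKL_NUM_THREADS"] = "6"

import torch

torch.set_num_threads(6)
```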
### Expected behavior
Texts are supposed to get encoded then uploaded to vectorDB | HuggingFaceInstructEmbeddings hangs in Docker container, but runs fine on MacOS | https://api.github.com/repos/langchain-ai/langchain/issues/9498/comments | 9 | 2023-08-19T20:07:57Z | 2024-03-13T20:01:26Z | https://github.com/langchain-ai/langchain/issues/9498 | 1,857,854,089 | 9,498 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have the following code:
```python
pipe = transformers.pipeline(
    model=llm_model,
    tokenizer=tokenizer,
    return_full_text=True,
    task='text-generation',
    temperature=0.2,
    max_new_tokens=200
)
llm = HuggingFacePipeline(pipeline=pipe)

retriever = vector_db.as_retriever(search_kwargs={"k": 4})

qa = RetrievalQA.from_chain_type(llm=llm,
                                 chain_type="stuff",
                                 retriever=retriever,
                                 return_source_documents=True)
```
When I call qa(query) it executes in about 24 seconds. But when I call llm directly with the same query and the context obtained from the retriever.get_relevant_documents method, it gives me a different (but still relevant) result and executes in 39 seconds, almost twice as long as qa. So I would like to understand what happens under the hood when I call RetrievalQA and why the two approaches differ so much in speed.
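My current understanding (please correct me if this is wrong) is that with `chain_type="stuff"` the RetrievalQA call boils down to a single templated LLM call, roughly like the sketch below, so the differences are probably down to the exact prompt text and generation settings rather than any extra machinery. The prompt wording here is paraphrased, and `llm` and `retriever` are the objects from my snippet above:

```python
query = "my question"
docs = retriever.get_relevant_documents(query)
context = "\n\n".join(doc.page_content for doc in docs)

# Paraphrase of the default "stuff" QA prompt: all retrieved chunks are
# concatenated into one prompt and the model is called exactly once.
prompt = (
    "Use the following pieces of context to answer the question at the end. "
    "If you don't know the answer, just say that you don't know.\n\n"
    f"{context}\n\nQuestion: {query}\nHelpful Answer:"
)
answer = llm(prompt)
```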
### Suggestion:
_No response_ | Issue: RetrievalQA runs twice faster than huggingface model call | https://api.github.com/repos/langchain-ai/langchain/issues/9492/comments | 5 | 2023-08-19T15:26:52Z | 2023-11-30T16:07:16Z | https://github.com/langchain-ai/langchain/issues/9492 | 1,857,775,366 | 9,492 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I bumped langchain to version 0.0.268 and encountered the error below while implementing `with_fallbacks`.
Error:
```
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
llm
  Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)
```
Sample Code:
```python
chat_llm = ChatOpenAI(model_name='gpt-3.5-turbo',
                      temperature=0,
                      model_kwargs={'top_p': 0.95},
                      max_tokens=512,
                      streaming=False,
                      verbose=False
                      )

fallback_llm = ChatOpenAI(model_name='gpt-4',
                          temperature=0,
                          model_kwargs={'top_p': 0.95},
                          max_tokens=512,
                          streaming=False,
                          verbose=False
                          )

chat_llm.with_fallbacks([fallback_llm], exceptions_to_handle=(Exception,))
```
Pydantic version: 1.10.12
Is this because of the pydantic migration? Are there any immediate fixes, or should I wait until August 25th for the migration to complete?
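For now my working assumption is that `LLMChain`'s validator rejects `RunnableWithFallbacks` because it is not a `BaseLanguageModel`. As an interim workaround (untested beyond my own setup) I compose the prompt and the fallback-wrapped model directly instead of going through `LLMChain`:

```python
from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("{question}")
llm_with_fallback = chat_llm.with_fallbacks([fallback_llm], exceptions_to_handle=(Exception,))

# LCEL composition accepts any Runnable, so the fallback wrapper is fine here.
chain = prompt | llm_with_fallback
result = chain.invoke({"question": "Hello!"})
```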
### Suggestion:
_No response_ | Issue: RunnableWithFallbacks: Can't instantiate abstract class BaseLanguageModel ??? | https://api.github.com/repos/langchain-ai/langchain/issues/9489/comments | 19 | 2023-08-19T11:55:03Z | 2024-05-21T16:35:19Z | https://github.com/langchain-ai/langchain/issues/9489 | 1,857,687,357 | 9,489 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
May I ask if there are any plans to support the Baidu Wenxin LLM in the near future? I saw that the JavaScript version supports Baidu Wenxin, but Python does not.
### Suggestion:
_No response_ | [ask]: about baiduwenxin LLM | https://api.github.com/repos/langchain-ai/langchain/issues/9488/comments | 2 | 2023-08-19T07:19:40Z | 2023-11-25T16:06:48Z | https://github.com/langchain-ai/langchain/issues/9488 | 1,857,615,923 | 9,488 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I use the embedding model vinai/phobert-base from Hugging Face:

Then it has this problem:
WARNING:sentence_transformers.SentenceTransformer:No sentence-transformers model found with name /root/.cache/torch/sentence_transformers/vinai_phobert-base. Creating a new one with MEAN pooling.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
### Suggestion:
_No response_ | Problem with embedding model | https://api.github.com/repos/langchain-ai/langchain/issues/9486/comments | 12 | 2023-08-19T03:25:59Z | 2023-12-02T16:06:12Z | https://github.com/langchain-ai/langchain/issues/9486 | 1,857,522,891 | 9,486 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
In my problem statement I am defining multiple tools, all based on retrieval chains, and I am using the OpenAI functions agent to call these tools.
When I run a query, it invokes a single vector store. But I need my agent to access the other vector stores if the answer is not found in that particular vector store. I know how to get this working with other agent types, but I need it to work with the OpenAI agent.
### Suggestion:
_No response_ | Invoke Multiple tools | https://api.github.com/repos/langchain-ai/langchain/issues/9483/comments | 3 | 2023-08-18T23:09:08Z | 2023-11-24T16:06:09Z | https://github.com/langchain-ai/langchain/issues/9483 | 1,857,382,588 | 9,483 |
[
"hwchase17",
"langchain"
]
| ### System Info
Name: langchain
Version: 0.0.268
Python 3.10.10
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain import PromptTemplate, OpenAI, LLMChain

prompt_template = "You're a helpful assistant that answers all questions?"

llm1 = OpenAI(model="foo", temperature=0)
llm_chain1 = LLMChain(
    llm=llm1,
    prompt=PromptTemplate.from_template(prompt_template)
)

llm2 = OpenAI(temperature=0)
llm_chain2 = LLMChain(
    llm=llm2,
    prompt=PromptTemplate.from_template(prompt_template)
)

llm_chain = llm_chain1.with_fallbacks([llm_chain2])

search_desc = "Use this tool to answer user questions when asked to Search the Web."
prefix = """ You're a helpful Assistant Chatbot. """
suffix = """{chat_history} {input} {agent_scratchpad}"""

from langchain.utilities import SerpAPIWrapper
from langchain.agents import (
    ZeroShotAgent,
    Tool,
    AgentExecutor,
)

search = SerpAPIWrapper()
tools = [
    Tool(
        func=search.run, description=search_desc, name="Search_the_web"
    )
]

input_variables = ["input", "chat_history", "agent_scratchpad"]
prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=input_variables,
)

tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(
    llm_chain=llm_chain,
    allowed_tools=tool_names,
    verbose=True,
    handle_parsing_errors="ignore",
)

response = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True
)
```
### Expected behavior
To be able to pass RunnableWithFallbacks as an alternative to LLMChain in ZeroShotAgent. | RunnableWithFallbacks | https://api.github.com/repos/langchain-ai/langchain/issues/9474/comments | 1 | 2023-08-18T20:22:34Z | 2023-10-15T15:32:35Z | https://github.com/langchain-ai/langchain/issues/9474 | 1,857,249,373 | 9,474 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.264
sqlalchemy 1.4.39
Platform mac os ventura 13.2.1
Python 3.11.4
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. from langchain.utilities import SQLDatabase
2. create a connection string (conn) with mssql dialect
3. create a connection using db = SQLDatabase.from_uri(conn)
4. use db.get_usable_table_names() to print table names
5. you will see an empty list returned
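One variable that may matter here (just a guess on my side): `SQLDatabase` only reflects a single schema, so if the tables live outside the default schema they will not show up unless it is passed explicitly. A sketch, with an illustrative connection string:

```python
from langchain.utilities import SQLDatabase

conn = "mssql+pyodbc://user:password@host/dbname?driver=ODBC+Driver+17+for+SQL+Server"
db = SQLDatabase.from_uri(conn, schema="my_schema")  # name the schema that holds the tables
print(db.get_usable_table_names())
```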
### Expected behavior
it should return table names from the database hosted in mssql | SQLDatabase object returns empty list with get_usable_table_names() | https://api.github.com/repos/langchain-ai/langchain/issues/9469/comments | 7 | 2023-08-18T18:43:24Z | 2024-01-30T00:41:12Z | https://github.com/langchain-ai/langchain/issues/9469 | 1,857,140,529 | 9,469 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I propose an enhancement to the pgvector functionality: the addition of an update feature in [pgvector.py](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pgvector.py). As it stands, the code only seems to allow for the addition of new embeddings. I believe a specific function, potentially named `PGVector.update_documents()`, would greatly improve the utility of this module.
### Motivation
The inspiration for this proposal comes from an opportunity I've identified in my project to enhance efficacy and efficiency. I believe that the ability to update embeddings when one of my documents changes would streamline the process significantly. The current workaround of deleting the entire collection or removing all embeddings for the target document to save new ones, though functional, leaves room for improvement. This feature would not only optimize my project but could also benefit other users of pgvector who might face a similar need.
### Your contribution
I am prepared and eager to contribute to the creation of this feature. The `PGVector.update_documents()` function I propose would:
1. Retrieve the current list of embeddings for the document and load it into memory.
2. Obtain the chunks of the updated document.
3. Compare the updated document's chunks with the existing embeddings in memory to identify matches, potentially saving unnecessary model calls for existing embeddings. If the embedding doesn't exist in memory, the model would be called to generate the embedding.
4. Issue an update to the DB for any changing embeddings, an insert for any new ones, and a delete for any missing embeddings.
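To make steps 1-4 concrete, here is a rough, standalone sketch of the diffing logic (placeholder names; not tied to pgvector internals or to any final API):

```python
from typing import Dict, List, Set, Tuple

def diff_chunks(existing: Dict[str, str], updated_chunks: List[str]) -> Tuple[List[str], List[str], Set[str]]:
    """`existing` maps embedding id -> chunk text already stored for the document."""
    updated = set(updated_chunks)
    current = set(existing.values())
    to_embed = [chunk for chunk in updated_chunks if chunk not in current]      # new chunks: call the model, INSERT
    to_delete = [uid for uid, text in existing.items() if text not in updated]  # stale chunks: DELETE
    unchanged = current & updated                                               # skip re-embedding these
    return to_embed, to_delete, unchanged
```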
I am willing to submit a PR for this feature upon receiving feedback from the langchain maintainers. | Add Functionality to Update Embeddings in pgvector | https://api.github.com/repos/langchain-ai/langchain/issues/9461/comments | 7 | 2023-08-18T17:07:12Z | 2024-01-18T01:38:08Z | https://github.com/langchain-ai/langchain/issues/9461 | 1,857,029,157 | 9,461 |
[
"hwchase17",
"langchain"
]
| The following code throws `langchain.schema.output_parser.OutputParserException` 90% of the time:
```python
from pydantic import BaseModel
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate, SystemMessagePromptTemplate
# (plus the JsonOutputFunctionsParser2 import; its module path may differ by version)

model = ChatOpenAI(model='gpt-4-0613')  # code generation on gpt-3.5 isn't strong enough
prompt = ChatPromptTemplate(messages=[
    SystemMessagePromptTemplate.from_template("Write code to solve the users problem. the last line of the python program should print the answer. Do not use sympy"),
    HumanMessagePromptTemplate.from_template(f"What is the {n}th prime"),
])

class PythonExecutionEnvironment(BaseModel):
    valid_python: str
    code_explanation: str

python_repl = {"name": "python_repl", "parameters": PythonExecutionEnvironment.model_json_schema()}

chain = prompt | model.bind(
    function_call={"name": python_repl["name"]}, functions=[python_repl]
) | JsonOutputFunctionsParser2()

response = chain.invoke({})
```
The reason is that gpt-4 is returning control characters in its Python code, which makes the parser throw:
https://github.com/langchain-ai/langchain/blob/50b8f4dcc722eb2ec5eccb17c25f1d3895442caa/libs/langchain/langchain/output_parsers/openai_functions.py#L44
The following resolves, but I'm not sure if there are other consequences:
```python
return json.loads(function_call_info, strict=False)
```
More details from Stack Overflow
* https://stackoverflow.com/questions/22394235/invalid-control-character-with-python-json-loads | langchain.schema.output_parser.OutputParserException: Could not parse function call data: Invalid control character Using JsonOutputFunctionParser | https://api.github.com/repos/langchain-ai/langchain/issues/9460/comments | 5 | 2023-08-18T16:52:56Z | 2024-01-11T07:32:13Z | https://github.com/langchain-ai/langchain/issues/9460 | 1,857,013,821 | 9,460 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am playing around with tools in LangChain, but I am running into an issue where the output of the model is not a `FunctionMessage` type even though the LLM is making a function call. For example in this code below
```python
# Import libraries
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage, AIMessage, FunctionMessage
from langchain.tools import tool, format_tool_to_openai_function
# Initialize the chat model
model = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
# Define tools
@tool
def myfunc(x):
"""Calculate myfunc(x)."""
return x ** 0.5
@tool
def myotherfunc(x):
"""Calculate myotherfunc(x)."""
return x ** 2
tools = [myfunc]
# Define your messages
messages = [
SystemMessage(content="You are a helpful AI who can calculate special functions using provided functions."),
HumanMessage(content="What is func(4)?")
]
# Call the predict_messages method
functions = [format_tool_to_openai_function(t) for t in tools]
response = model.predict_messages(messages, functions=functions)
# Print the assistant's reply
print(response)
```
`response` is an AI message
```
AIMessage(content='', additional_kwargs={'function_call': {'name': 'myfunc', 'arguments': '{\n "x": 4\n}'}}, example=False)
```
But I believe it should be a FunctionMessage.
Secondly, there is no clear way to turn an `AIMessage` of this type into a FunctionMessage to pass downstream. I have tried
```python
FunctionMessage(content='', **response.additional_kwargs)
FunctionMessage(content='', additional_kwargs=response.additional_kwargs)
FunctionMessage(content='', additional_kwargs=str(response.additional_kwargs))
```
but each of these attempts gives an error.
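My best guess so far (happy to be corrected) is that `FunctionMessage` is meant to carry the *result* of the function call and requires a `name` field, which would explain why the attempts above fail validation. Something along these lines should work, reusing `model`, `messages`, `functions`, `myfunc`, and `response` from the script above:

```python
import json

from langchain.schema import FunctionMessage

fc = response.additional_kwargs["function_call"]
args = json.loads(fc["arguments"])  # {"x": 4}
result = myfunc.run(args)           # execute the tool ourselves

# Feed the result back to the model as a FunctionMessage (name is required).
fn_msg = FunctionMessage(name=fc["name"], content=str(result))
messages.extend([response, fn_msg])
follow_up = model.predict_messages(messages, functions=functions)
```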
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the code provided above.
### Expected behavior
The output should be a `FunctionMessage`. | OpenAI function call is not a FunctionMessage type | https://api.github.com/repos/langchain-ai/langchain/issues/9457/comments | 4 | 2023-08-18T15:54:11Z | 2023-08-18T22:23:26Z | https://github.com/langchain-ai/langchain/issues/9457 | 1,856,945,358 | 9,457 |
[
"hwchase17",
"langchain"
]
| ### System Info
I got this error on my office laptop.
OS: Win 10
I checked AzureAI key, url, deployment and model names. No problem about that.
Someone else is having the same issue as me: https://stackoverflow.com/questions/76750207/azureopenai-not-available-in-langchain
I deleted the `import openai` part, but there was no change.
Could somebody please help me?
```
from dotenv import load_dotenv
from langchain.llms import AzureOpenAI
from langchain.embeddings import OpenAIEmbeddings
import openai
import os
# Load environment variables
load_dotenv()
# Configure Azure OpenAI Service API
openai.api_type = "azure"
openai.api_version = "2022-12-01"
openai.api_base = os.getenv('OPENAI_API_BASE')
openai.api_key = os.getenv("OPENAI_API_KEY")
# Create a completion
llm = AzureOpenAI(deployment_name="text-davinci-003", model_name="text-davinci-003")
joke = llm("Tell me a dad joke")
print(joke)
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from dotenv import load_dotenv
from langchain.llms import AzureOpenAI
from langchain.embeddings import OpenAIEmbeddings
import openai
import os

# Load environment variables
load_dotenv()

# Configure Azure OpenAI Service API
openai.api_type = "azure"
openai.api_version = "2022-12-01"
openai.api_base = os.getenv('OPENAI_API_BASE')
openai.api_key = os.getenv("OPENAI_API_KEY")
```
### Expected behavior
```python
from dotenv import load_dotenv
from langchain.llms import AzureOpenAI
from langchain.embeddings import OpenAIEmbeddings
import openai
import os
```
| cannot import name 'AzureOpenAI' from 'langchain.llms' | https://api.github.com/repos/langchain-ai/langchain/issues/9453/comments | 5 | 2023-08-18T15:19:17Z | 2024-02-13T16:13:37Z | https://github.com/langchain-ai/langchain/issues/9453 | 1,856,898,665 | 9,453 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hello!
I wrote code that is very similar to this example:
https://python.langchain.com/docs/integrations/toolkits/document_comparison_toolkit
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from pydantic import BaseModel, Field

from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS


class DocumentInput(BaseModel):
    question: str = Field()


llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")

tools = []
files = [
    # https://abc.xyz/investor/static/pdf/2023Q1_alphabet_earnings_release.pdf
    {
        "name": "alphabet-earnings",
        "path": "/Users/harrisonchase/Downloads/2023Q1_alphabet_earnings_release.pdf",
    },
    # https://digitalassets.tesla.com/tesla-contents/image/upload/IR/TSLA-Q1-2023-Update
    {
        "name": "tesla-earnings",
        "path": "/Users/harrisonchase/Downloads/TSLA-Q1-2023-Update.pdf",
    },
]

for file in files:
    loader = PyPDFLoader(file["path"])
    pages = loader.load_and_split()
    text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    docs = text_splitter.split_documents(pages)
    embeddings = OpenAIEmbeddings()
    retriever = FAISS.from_documents(docs, embeddings).as_retriever()

    # Wrap retrievers in a Tool
    tools.append(
        Tool(
            args_schema=DocumentInput,
            name=file["name"],
            description=f"useful when you want to answer questions about {file['name']}",
            func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever),
        )
    )

llm = ChatOpenAI(
    temperature=0,
    model="gpt-3.5-turbo-0613",
)

agent = initialize_agent(
    agent=AgentType.OPENAI_FUNCTIONS,
    tools=tools,
    llm=llm,
    verbose=True,
)

agent({"input": "did alphabet or tesla have more revenue?"})
```
### Expected behavior
It's working so far, but when I directly ask the AI to compare the documents mentioning them by name,
the AI does not know what I am talking about!
I first have to ask questions specifically about each document, separately mentioning their names; only then can the LLM compare them and answer my questions.
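The only workaround idea I have so far (untested, and maybe the wrong direction) is to make each tool's description spell out what the document contains, so the agent can map a comparison question to both tools by name. Roughly, replacing the `Tool(...)` built inside the loop above with something like:

```python
Tool(
    args_schema=DocumentInput,
    name=file["name"],
    description=(
        f"Useful for answering questions about the contents of {file['name']} "
        "(a quarterly earnings report); use it whenever that report is mentioned "
        "by name or needs to be compared with another report."
    ),
    func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever),
)
```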
Any idea to solve this issue?
I would appreciate any help! :)
cheers | Aksing about uploaded documents only works when I first ask the AI on a specific one | https://api.github.com/repos/langchain-ai/langchain/issues/9451/comments | 2 | 2023-08-18T14:23:13Z | 2023-11-25T16:06:54Z | https://github.com/langchain-ai/langchain/issues/9451 | 1,856,811,221 | 9,451 |
[
"hwchase17",
"langchain"
]
| ### Feature request
For the chains in libs/langchain/langchain/chains/summarize, it might be useful to add to the prompt something like:
"Try to limit the summary to {length} words", and set the length as an input parameter. So that we can have some influence over the length of the output summary.
I know that most LLMs are not good at counting words; maybe we should use tokens instead of words?
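For context, the workaround I use today is passing a custom prompt into `load_summarize_chain` (shown here for the stuff chain; the map_reduce/refine loaders take differently named prompt arguments), which is part of why a first-class length parameter feels like a natural fit:

```python
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["text"],
    template=(
        "Write a concise summary of the following, trying to limit it to about 100 words:\n\n"
        "{text}\n\nCONCISE SUMMARY:"
    ),
)

llm = ChatOpenAI(temperature=0)
chain = load_summarize_chain(llm, chain_type="stuff", prompt=prompt)
# summary = chain.run(docs)
```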
### Motivation
When doing summarization, we sometimes want summaries of different lengths. For example, we might need to fit the summary into a system with a length limit, or sometimes we want a longer summary so we can include more details.
### Your contribution
I can submit a pull request, but I'm not sure if people believe the proposal is a good idea or not. | Add some control over the summary length for summarize chains | https://api.github.com/repos/langchain-ai/langchain/issues/9449/comments | 4 | 2023-08-18T13:49:38Z | 2024-04-17T05:59:06Z | https://github.com/langchain-ai/langchain/issues/9449 | 1,856,760,404 | 9,449 |
[
"hwchase17",
"langchain"
]
| ### System Info
I use Python 3.10.12 and langchain 0.0.262.
Using a chroma vector database within langchain, I encounter different behaviors between the .get() and .peek() methods. Specifically, when I use .get(), the embeddings field appears as None. However, when I use .peek(), the embeddings field is complete.
Here is the code I have and the results:
Using db._collection.peek(1):
```
{'ids': ['/path/to/document.pdf_0'],
'embeddings': [[-0.013034389354288578,
0.004974348470568657, ...]],
'metadatas': [{'index': 0,
'page': 0,
'source': '/path/to/document.pdf'}],
'documents': ['Document content here...']}
```
Using db._collection.get(ids='/path/to/document.pdf_0'):
```
{'ids': ['/path/to/document.pdf_0'],
'embeddings': None,
'metadatas': [{'index': 0,
'page': 0,
'source': '/path/to/document.pdf'}],
'documents': ['Document content here...']}
```
The issue seems to be that the embeddings are missing when using the .get() method, but they are present when using the .peek() method. Could you please help me understand why this inconsistency occurs?
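My best guess so far (unverified) is that chromadb's `get()` simply excludes embeddings unless they are requested via `include`, whereas `peek()` requests them by default. For example, the following should return the embeddings as well, though the asymmetry between the two defaults still seems confusing:

```python
result = db._collection.get(
    ids=["/path/to/document.pdf_0"],
    include=["embeddings", "metadatas", "documents"],
)
```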
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a Chroma database with custom metadata and OpenAI embeddings.
2. Filter on just the first source with .get() so you can see the first k documents
3. Peek the same k documents with .peek()
### Expected behavior
The .get() method should return the embeddings too. | Inconsistent Embedding Field Behavior Between .get() and .peek() Methods in Vector Database within Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/9442/comments | 1 | 2023-08-18T09:24:20Z | 2023-08-18T09:36:04Z | https://github.com/langchain-ai/langchain/issues/9442 | 1,856,376,828 | 9,442 |
[
"hwchase17",
"langchain"
]
| ### System Info
### Environment:
Google COLAB project. When running `!cat /etc/os-release` it prints:
```
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```
### Python Environment
`!python -V`:
```
Python 3.10.12
```
`!pip show langchain`:
```
Name: langchain
Version: 0.0.267
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
Author:
Author-email:
License: MIT
Location: /usr/local/lib/python3.10/dist-packages
Requires: aiohttp, async-timeout, dataclasses-json, langsmith, numexpr, numpy, openapi-schema-pydantic, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by:
```
### Who can help?
@hwchase17
### Information
- [x] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a blank Google Colab Project (https://colab.research.google.com/#create=true) then add a code block and paste in this code:
```py
!pip install langchain==0.0.267
# or try just '!pip install langchain' without the explicit version
from pydantic import BaseModel, Field
class InputArgsSchema(BaseModel):
strarg: str = Field(description="The string argument for this tool")
# THIS WORKS:
from typing import Type
class Foo(BaseModel):
my_base_model_subclass: Type[BaseModel] = Field(..., description="Equivalent to the args_schema field in langchain/StructuredTool")
my_foo = Foo(
my_base_model_subclass=InputArgsSchema
)
print(f"My foo {my_foo} is successfully instantiated")
# BUT THIS DOES NOT:
from langchain.tools import StructuredTool
def my_tool_impl(strarg: str):
print(f"Called myTool with strarg={strarg}")
my_tool = StructuredTool(
name="my_tool",
description="A demo tool for testing purposes",
args_schema=InputArgsSchema,
func=my_tool_impl
)
```
Now run the code block.
### Expected behavior
The `StructuredTool` instance should be instantiated without an exception and the `InputArgsSchema` should be accepted as argument for `args_schema`. | StructuredTool raises an error when instantiated | https://api.github.com/repos/langchain-ai/langchain/issues/9441/comments | 9 | 2023-08-18T08:45:10Z | 2024-01-23T06:13:17Z | https://github.com/langchain-ai/langchain/issues/9441 | 1,856,316,147 | 9,441 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.267, Python 3.10, Poetry virtualenv, Pop_OS 22.04
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import TransformChain
def transform(inputs):
return {
'output': {
'structured': {
'report': 'Done'
}
}
}
async def atransform(inputs): return transform(inputs)
chain = TransformChain(
input_variables=['text'],
output_variables=['output'],
transform=transform, # <-- type error below happens here
atransform=atransform, # <-- If I remove this, I get: Argument missing for parameter "atransform"
)
```
```
Argument of type "(inputs: Unknown) -> dict[str, dict[str, dict[str, str]]]" cannot be assigned to parameter "transform" of type "(Dict[str, str]) -> Dict[str, str]" in function "__init__"
Type "(inputs: Unknown) -> dict[str, dict[str, dict[str, str]]]" cannot be assigned to type "(Dict[str, str]) -> Dict[str, str]"
Function return type "dict[str, dict[str, dict[str, str]]]" is incompatible with type "Dict[str, str]"
"dict[str, dict[str, dict[str, str]]]" is incompatible with "Dict[str, str]"
Type parameter "_VT@dict" is invariant, but "dict[str, dict[str, str]]" is not the same as "str"
```
### Expected behavior
Two type checking issues since 0.0.267:
- `transform` should probably take/return `dict[str, Any]` instead of `dict[str, str]` ([like `atransform` does](https://github.com/langchain-ai/langchain/blob/0689628489967785f3a11a9f29d8f6f90930f4f4/libs/langchain/langchain/chains/transform.py#L31C1-L35C40))
- `atransfrom` should probably not be mandatory
Most other code seems to use `Dict[str, Any]`, and non-string values seem to work just fine.
Prior to `0.0.267` the type system didn't enforce the type of `transform` when constructing `TransformChain`. It also started requiring an `atransform` parameter even though it seems to be intended to be optional. | TransformChain wrong function types, enforced since 0.0.267 | https://api.github.com/repos/langchain-ai/langchain/issues/9440/comments | 5 | 2023-08-18T08:41:31Z | 2023-11-25T16:06:59Z | https://github.com/langchain-ai/langchain/issues/9440 | 1,856,311,063 | 9,440 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi everyone, I'm trying to deploy and use langsmith locally.
I deployed it in a Docker container using:
```
langsmith start --expose --openai-api-key=<my azure OpenAi key>
```
the docker container looks good

I opened all the ports in use to avoid any problems there; I'm running LangSmith on a remote computer.
I set up the environment variables
LANGCHAIN_TRACING_V2=true
LANGCHAIN_ENDPOINT=https://cc23-20-79-217-xxx.ngrok.io
LANGCHAIN_API_KEY=<my key>
but the interface is not loading the projects

when I try to access the langsmith endpoint it returns
```
{
"detail": "Not Found"
}
```
using the chat example that appears in this repo
https://github.com/langchain-ai/langsmith-cookbook/tree/main/feedback-examples/streamlit
I can see in the endpoint https://cc23-20-79-217-xxx.ngrok.io that the runs are being tracked, but I can't see them in the frontend
**Debugging the front end, it fails while trying to fetch the tenants: it tries to fetch them from http://127.0.0.1:1984/tenants while, if I'm not misunderstanding it, it should be getting them from http://20.79.217.xxx:1984/tenants.**

Could it be a problem with Azure OpenAI, or did I do something wrong with the installation?
Thanks in advance
### Suggestion:
_No response_ | Langsmith expose is not working Azure OpenAI services | https://api.github.com/repos/langchain-ai/langchain/issues/9438/comments | 4 | 2023-08-18T08:06:33Z | 2024-01-30T00:45:35Z | https://github.com/langchain-ai/langchain/issues/9438 | 1,856,263,823 | 9,438 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: 0.0.267
### Who can help?
@hwaking @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# import packages
from langchain import LLMMathChain
from langchain.vectorstores.redis import Redis
from langchain.utilities import BingSearchAPIWrapper
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
from langchain.chat_models import AzureChatOpenAI
import dotenv
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(memory_key="chat_history")
dotenv.load_dotenv()

# init agent
llm = AzureChatOpenAI(deployment_name="OpenAImodel")
search = BingSearchAPIWrapper(k=10)
llm_math_chain = LLMMathChain.from_llm(llm=llm)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events. You should ask targeted questions."
    ),
    Tool(
        name="Calculator",
        func=llm_math_chain.run,
        description="useful for when you need to answer questions about math"
    ),
]
agent = initialize_agent(tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)

# Ask some questions; at a random point you will get the error `Could not parse LLM output`.
# (The queries below are in Chinese: the current temperature in Taipei in Celsius, causes of a
# chiller's high approach temperature, causes of pump motor failure, and follow-up questions.)
agent.run(input="幫我查詢現在台北的溫度,用攝氏單位")
agent.run("用中文告訴我: 研華科技冰水主機的哪些組件或相關設備可能造成趨近溫度過高,最後用中文告訴我")
agent.run("用中文回答: 幫我查詢幫浦馬達故障通常是因為什麼原因?")
agent.run("冰水主機的哪些組件或相關設備可能造成趨近溫度過高")
agent.run("用中文回答剛剛的問題")
```
### Expected behavior
# At a random point you will get the error: `Could not parse LLM output`
File ~/miniconda3/envs/langc/lib/python3.8/site-packages/langchain/agents/conversational/output_parser.py:26, in ConvoOutputParser.parse(self, text)
24 match = re.search(regex, text)
25 if not match:
---> 26 raise OutputParserException(f"Could not parse LLM output: `{text}`")
27 action = match.group(1)
28 action_input = match.group(2)
OutputParserException: Could not parse LLM output: `在研華科技的冰水主機中,可能造成趨近溫度過高的組件或相關設備包括:風扇故障、散熱片堵塞、水泵故障、循環水管堵塞、或是防凍液不足等。建議您確保這些組件和設備處於良好狀態,以確保冰水主機的正常運作。`
# my observation
I found that the error always comes when the `parse` function of ConvoOutputParser is called.
The text might not contain keywords such as `Action:`, so the regular expression will not match the LLM output.
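As a stopgap I am considering passing `handle_parsing_errors=True` (reusing the objects from the script above) so the executor feeds the unparsable text back to the model instead of raising; this is only a mitigation, not a fix for the parser itself:

```python
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory,
    handle_parsing_errors=True,  # recover from OutputParserException instead of raising
)
```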
good luck!!
| ConvoOutputParser randomly fails to parse LLM output | https://api.github.com/repos/langchain-ai/langchain/issues/9436/comments | 2 | 2023-08-18T07:23:52Z | 2023-11-24T16:06:34Z | https://github.com/langchain-ai/langchain/issues/9436 | 1,856,204,649 | 9,436 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello, I'm trying to run langsmith locally, together with Azure OpenAi services.
I started langsmith in docker and it looks like below

And I'm running the chatbot from here https://github.com/langchain-ai/langsmith-cookbook/blob/main/feedback-examples/streamlit/README.md
Looks like the app is not allowed to send the interaction to langsmith.
```
File "/home/azureuser/anaconda3/envs/snowflake/lib/python3.8/site-packages/langsmith/utils.py", line 55, in raise_for_status_with_text
response.raise_for_status()
File "/home/azureuser/anaconda3/envs/snowflake/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 405 Client Error: Not Allowed for url: http://20.79.217.xxx/runs/3abf1f4c-2b48-4d1b-9628-979f7d12d9d5/share
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/azureuser/anaconda3/envs/snowflake/lib/python3.8/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/home/azureuser/langsmith/main.py", line 126, in <module>
url = client.share_run(run.id)
File "/home/azureuser/anaconda3/envs/snowflake/lib/python3.8/site-packages/langsmith/client.py", line 831, in share_run
raise_for_status_with_text(response)
File "/home/azureuser/anaconda3/envs/snowflake/lib/python3.8/site-packages/langsmith/utils.py", line 57, in raise_for_status_with_text
raise ValueError(response.text) from e
ValueError: <html>
<head><title>405 Not Allowed</title></head>
<body>
<center><h1>405 Not Allowed</h1></center>
<hr><center>nginx/1.24.0</center>
</body>
</html>
```
Also something that looks a bit odd is when I open the website, it's not showing me the projects

and I can't create a new one; it keeps telling me that the field is required

What could I do? Any hint is more than appreciated.
### Suggestion:
_No response_ | Running Langsmith locally not working | https://api.github.com/repos/langchain-ai/langchain/issues/9435/comments | 1 | 2023-08-18T07:11:45Z | 2023-08-18T07:57:29Z | https://github.com/langchain-ai/langchain/issues/9435 | 1,856,187,220 | 9,435 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain Version: 0.0.267
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`llm = AzureChatOpenAI(deployment_name='gpt-4', temperature=0.0)`
`llm.dict()`
```python
{'model': 'gpt-3.5-turbo',
 'request_timeout': None,
 'max_tokens': None,
 'stream': False,
 'n': 1,
 'temperature': 0.0,
 'engine': 'gpt-4',
 '_type': 'azure-openai-chat'}
```
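For now I work around it by setting `model_name` explicitly, since the `gpt-3.5-turbo` value above looks like the inherited default rather than anything read from the deployment:

```python
llm = AzureChatOpenAI(
    deployment_name="gpt-4",
    model_name="gpt-4",  # set explicitly so .dict() and token accounting match the deployment
    temperature=0.0,
)
```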
### Expected behavior
The model name and the engine should match. | While using a GPT-4 deployed on AzureOpenAI, model name is showing as 'gpt-3.5-turbo', while the engine is showing as 'gpt-4' | https://api.github.com/repos/langchain-ai/langchain/issues/9434/comments | 3 | 2023-08-18T06:36:12Z | 2023-12-20T16:06:41Z | https://github.com/langchain-ai/langchain/issues/9434 | 1,856,141,970 | 9,434 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have a JSONL file that has the format:
```
{
  "question": ...
  "answer": ...
}
```
I only want to use "question" for "page_content" to retrieve, because if I merge "question" and "answer" into "page_content", it will be noise to retrieve.
I have an idea that after retrieving:
- I will replace "page_content" with "answer" to give it to LLM.
- Or customize function as_retrieval() to return "answer " in metadata.
- Or the new format data?
So how can I do this? Thank you.
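To make the first idea concrete, this is roughly what I mean (the file path, retriever, and query below are placeholders from my setup): embed only the question, keep the answer in the metadata, and swap it into page_content after retrieval, before the documents go to the LLM.

```python
import json

from langchain.schema import Document

docs = []
with open("qa.jsonl") as f:  # placeholder path
    for line in f:
        record = json.loads(line)
        docs.append(Document(page_content=record["question"],
                             metadata={"answer": record["answer"]}))

# ... build the vector store / retriever from `docs` as usual ...

retrieved = retriever.get_relevant_documents(query)
for doc in retrieved:
    doc.page_content = doc.metadata["answer"]  # hand the answer text to the LLM
```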
### Suggestion:
_No response_ | Issue: How to replace "page_content" after "retrieval" | https://api.github.com/repos/langchain-ai/langchain/issues/9429/comments | 4 | 2023-08-18T04:04:37Z | 2023-11-24T16:06:39Z | https://github.com/langchain-ai/langchain/issues/9429 | 1,856,013,098 | 9,429 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
template = """Answer the following questions as best you can. You have access to the following tools:
{tools}
{chat_history}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
{agent_scratchpad}
"""
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names,
)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent,
tools=tools,
verbose=True,
memory = memory,
system_message = system_message,
agent_kwargs = agent_kwargs,
handle_parsing_errors=True,
)
template = """
after answering each user questions you MUST append a mutiple follow up question whose purpose it get the user to engage more with you based on the topic so that the conversation can keep flowing
Instructions
'''
1. Your task is to engage with the user and make the user take a trivia quiz alongside the chat . You will start by asking in the beginning ```Before we start, Do you want to play a trivia and get a chance to enter a draw to win a prize?``` If user gives a positive response
then, you should take a quiz with 5 questions with 4 options each . Quiz should be taken from the following question bank {question_bank}.If the user says no you should ask randomly in between ```Do you want to play a trivia and get a chance to enter a draw to win a prize?``` If user gives a positive response
then, you should take a quiz with 5 questions with 4 options each . Quiz should be taken from the following question bank {question_bank}. If the user says no and hasnt responded positively or a yes ever to take the quiz only and only then in the end you should ask the user ```Do you want to play a trivia and get a chance to enter a draw to win a prize?``` If user gives a positive response
then, you should take a quiz with 5 questions with 4 options each . Quiz should be taken from the following question bank {question_bank}.
'''
Above Instructions should be strictly followed
""".format(question_bank = questions)
This is the piece of code I am working with, and I need to build the prompt in such a way that the agent will run the quiz whenever it is asked to.
I need to pass a system message into this so that it follows the provided instructions.
This is the Link of the source code:-
https://python.langchain.com/docs/modules/agents/how_to/custom_agent_with_tool_retrieval
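What I think I actually need (please correct me if this is the wrong approach) is to fold the instructions into the agent's own prompt template, since as far as I can tell `LLMSingleActionAgent` takes everything from its `llm_chain`'s prompt and has no separate system-message slot. A rough, untested sketch:

```python
# Prepend the instructions to the existing `template` string before building the
# CustomPromptTemplate / LLMChain from the linked example (names as in that doc page).
quiz_instructions = (
    "Before anything else, offer the user the trivia quiz described in the instructions, "
    "and append an engaging follow-up question after every answer.\n\n"
)
template = quiz_instructions + template
```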
### Suggestion:
I tried to add the template as a system message, but the agent was not able to follow the instructions:
```python
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names,
)
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    verbose=True,
    memory=memory,
    system_message=system_message,
    agent_kwargs=agent_kwargs,
    handle_parsing_errors=True,
)
```
But this was not working
Source code link:-
https://python.langchain.com/docs/modules/agents/how_to/custom_agent_with_tool_retrieval | Add System message in the LLMSingleActionAgent | https://api.github.com/repos/langchain-ai/langchain/issues/9427/comments | 1 | 2023-08-18T02:54:56Z | 2023-11-24T16:06:44Z | https://github.com/langchain-ai/langchain/issues/9427 | 1,855,964,090 | 9,427 |