issue_owner_repo (list) | issue_body (string) | issue_title (string) | issue_comments_url (string) | issue_comments_count (int64) | issue_created_at (string) | issue_updated_at (string) | issue_html_url (string) | issue_github_id (int64) | issue_number (int64) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### System Info
Hey! Below is the code I'm using:
```python
llm_name = "gpt-3.5-turbo"
# llm_name = "gpt-4"

os.environ["OPENAI_API_KEY"] = ""
st.set_page_config(layout="wide")

def load_db(file_path, chain_type, k):
    loader = PyPDFLoader(file_path)
    documents = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=300)
    docs = text_splitter.split_documents(documents)
    embeddings = OpenAIEmbeddings()
    db = DocArrayInMemorySearch.from_documents(docs, embeddings)
    retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": k})
    qa = ConversationalRetrievalChain.from_llm(
        llm=ChatOpenAI(model_name=llm_name, temperature=1),
        chain_type=chain_type,
        retriever=retriever,
        return_source_documents=False,
        return_generated_question=False
    )
    return qa
```
Even though I'm using the RecursiveCharacterTextSplitter, the chain returns the error below.
`InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 5822 tokens. Please reduce the length of the messages.`
Is there anything which will fix this issue?
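One possible mitigation (a sketch, not a confirmed fix — the specific values below are assumptions): splitting only controls the size of each chunk, so several retrieved chunks plus the question can still overflow the 4,097-token window. Retrieving fewer, smaller chunks, or using a larger-context model, keeps the stuffed prompt within the limit.

```python
# Smaller chunks and fewer retrieved documents (values are illustrative assumptions).
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 3})

# Alternatively, a model with a larger context window.
llm = ChatOpenAI(model_name="gpt-3.5-turbo-16k", temperature=1)
```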
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
-
### Expected behavior
I'm using the RecursiveCharacterTextSplitter so that the documents are split into chunks that fit the model's context length, so I expected the request to stay under the limit. Shouldn't it work? | InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 5822 tokens. Please reduce the length of the messages. | https://api.github.com/repos/langchain-ai/langchain/issues/7411/comments | 3 | 2023-07-08T19:08:53Z | 2023-11-15T16:07:44Z | https://github.com/langchain-ai/langchain/issues/7411 | 1,795,068,803 | 7,411 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I'm trying to create a Q&A application where I'm using Vicuna, and it's taking a lot of time to return the response. Below is the code:
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import HuggingFaceEmbeddings
import llama_cpp
from run_localGPT import load_model

def load_db(file, chain_type, k):
    # load documents
    loader = PyPDFLoader(file)
    documents = loader.load()
    # split documents
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
    docs = text_splitter.split_documents(documents)
    # define embedding
    embeddings = HuggingFaceEmbeddings()
    # create vector database from data
    db = DocArrayInMemorySearch.from_documents(docs, embeddings)
    # define retriever
    retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": k})
    # create a chatbot chain. Memory is managed externally.
    qa = ConversationalRetrievalChain.from_llm(
        llm=load_model(model_id="TheBloke/Wizard-Vicuna-13B-Uncensored-GGML", device_type="mps", model_basename="Wizard-Vicuna-13B-Uncensored.ggmlv3.q2_K.bin"),  # ChatOpenAI(model_name=llm_name, temperature=0),
        chain_type=chain_type,
        retriever=retriever,
        return_source_documents=True,
        return_generated_question=True,
    )
    return qa
```
I'm using the Vicuna-13B model with Hugging Face embeddings. My thought is that response time would be much better if I kept the Hugging Face embeddings but used a smaller benchmark Q&A model instead. Is there any way to load normal models such as distilbert, roberta, or distilbert-base-uncased-distilled-squad?
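One direction (a sketch under assumptions — the model id and task below are illustrative, and note that extractive QA models such as distilbert-base-uncased-distilled-squad are not text-generation models, so they cannot be dropped into this chain directly): wrap a smaller generative Hugging Face model with LangChain's `HuggingFacePipeline` and pass it as the chain's LLM, reusing the retriever built in `load_db`.

```python
from langchain.llms import HuggingFacePipeline

# Assumption: a small seq2seq model is acceptable quality and much faster than Vicuna-13B.
small_llm = HuggingFacePipeline.from_model_id(
    model_id="google/flan-t5-base",      # illustrative choice, not a recommendation
    task="text2text-generation",
    model_kwargs={"max_length": 512},
)

qa = ConversationalRetrievalChain.from_llm(
    llm=small_llm,
    chain_type="stuff",
    retriever=retriever,
)
```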
### Motivation
To utilize the benchmark models for better response time.
### Your contribution
- | Adding function to utilize normal models like distilbert, roberta etc | https://api.github.com/repos/langchain-ai/langchain/issues/7406/comments | 1 | 2023-07-08T16:21:18Z | 2023-10-14T20:09:52Z | https://github.com/langchain-ai/langchain/issues/7406 | 1,795,005,669 | 7,406 |
[
"hwchase17",
"langchain"
]
| Hi: I'm trying to merge a list of `langchain.vectorstores.FAISS` objects to create a new (merged) vectorstore, but I still need the original (pre-merge) vectorstores intact. I can use `x.merge_from(y)` which works great:
```python
merged_stores = reduce(lambda x, y: (z := x).merge_from(y) or z, stores)
```
but that modifies x in place, so my original list of vectorstores ends up with its first store containing a merge with all other elements of the list: which is not what I want. So I tried using `deepcopy()` to make a temporary copy of the vectorstore I'm merging into:
```python
merged_stores = reduce(lambda x, y: (z := deepcopy(x)).merge_from(y) or z, stores)
```
which does exactly what I want. However, I now find that when I use a Universal Sentence Encoder embedding in the original list of vectorstores I get an exception from `deepcopy()`:
`TypeError: cannot pickle '_thread.RLock' object`
Is there an obvious way for me to achieve this (non-destructive) merge without adding my own `FAISS.merge_from_as_copy()` method to the `langchain.vectorstores.FAISS` class?
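One workaround that avoids `deepcopy` entirely (a sketch; it assumes `embeddings` is the same embedding object the stores were built with): round-trip the first store through `save_local`/`load_local` to get an independent copy, then merge the rest into that copy so the originals stay intact.

```python
import tempfile
from langchain.vectorstores import FAISS

def merge_stores_non_destructively(stores, embeddings):
    # Round-trip the first store through disk to obtain an independent copy.
    with tempfile.TemporaryDirectory() as tmp:
        stores[0].save_local(tmp)
        merged = FAISS.load_local(tmp, embeddings)
    # merge_from mutates only the fresh copy, so the original stores are untouched.
    for store in stores[1:]:
        merged.merge_from(store)
    return merged
```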
| Trying to merge a list of FAISS vectorstores without modifying the original vectorstores, but deepcopy() fails | https://api.github.com/repos/langchain-ai/langchain/issues/7402/comments | 5 | 2023-07-08T14:05:34Z | 2023-10-21T16:07:25Z | https://github.com/langchain-ai/langchain/issues/7402 | 1,794,954,384 | 7,402 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add the `name` and `arguments` from `function_call` to `on_llm_new_token` when `streaming=True`.
Right now it gets called with an empty token several times, and there is no way to retrieve `function_call['arguments']`.
We would need to add this to every LLM implementation.
### Motivation
I'm streaming my calls (the user gets to see the output in real time) and I've decided to use function calls so I get structured output. I want to show the user the results (one argument is `message`), but I can't, because only the tokens from `content` are streamed.
Using the plain openai api I can do this.
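For reference, this is roughly what the raw OpenAI streaming API exposes (a sketch; the function schema is a made-up example): each streamed chunk's `delta` can carry incremental `function_call.name` / `function_call.arguments` fragments, which is the data I'd like surfaced through `on_llm_new_token`.

```python
import openai

functions = [{
    "name": "reply",  # hypothetical schema, for illustration only
    "parameters": {"type": "object", "properties": {"message": {"type": "string"}}},
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "Say hi"}],
    functions=functions,
    stream=True,
)
for chunk in response:
    delta = chunk["choices"][0]["delta"]
    if "function_call" in delta:
        # Incremental JSON fragments of the arguments arrive here.
        print(delta["function_call"].get("arguments", ""), end="")
```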
### Your contribution
I can contribute a PR to a model and some tests, but I need guidance as to what the API should be. As I understand it, only `chat_models.openai` should need to be modified. | Add the `name` and `arguments` from `function_call` to `on_llm_new_token` when `streaming=True` | https://api.github.com/repos/langchain-ai/langchain/issues/7385/comments | 6 | 2023-07-07T23:14:25Z | 2023-09-25T09:45:57Z | https://github.com/langchain-ai/langchain/issues/7385 | 1,794,522,874 | 7,385 |
[
"hwchase17",
"langchain"
]
| ### System Info
> langchain==0.0.227
> langchainplus-sdk==0.0.20
> chromadb==0.3.26
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm sorry, but I don't have the time to carve out an MRE right now. My take is that it's still better to report it than not to.
### Expected behavior
`similarity_search_with_score` returns the distance as expected, but `similarity_search_with_relevance_scores` gives the same values, so that the closest distances return the smallest values, even though the output of the latter function is supposed to be higher for vectors that are closer:
> `similarity_search_with_relevance_scores`
> Return docs and relevance scores in the range [0, 1].
> 0 is dissimilar, 1 is most similar. | ChromaDB score goes the wrong way | https://api.github.com/repos/langchain-ai/langchain/issues/7384/comments | 6 | 2023-07-07T23:11:04Z | 2024-02-09T16:25:58Z | https://github.com/langchain-ai/langchain/issues/7384 | 1,794,520,863 | 7,384 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I want to pass arguments to the retriever used in the `VectorStoreIndexWrapper` `query` and `query_with_sources` methods. Right now, those methods don't have any means of passing `VectorStoreRetriever` arguments into `vectorstore.as_retriever()`:
```python
# langchain/indexes/vectorstore.py
def query(
    self, question: str, llm: Optional[BaseLanguageModel] = None, **kwargs: Any
) -> str:
    """Query the vectorstore."""
    llm = llm or OpenAI(temperature=0)
    chain = RetrievalQA.from_chain_type(
        llm, retriever=self.vectorstore.as_retriever(), **kwargs  # <---
    )
    return chain.run(question)

def query_with_sources(
    self, question: str, llm: Optional[BaseLanguageModel] = None, **kwargs: Any
) -> dict:
    """Query the vectorstore and get back sources."""
    llm = llm or OpenAI(temperature=0)
    chain = RetrievalQAWithSourcesChain.from_chain_type(
        llm, retriever=self.vectorstore.as_retriever(), **kwargs  # <---
    )
    return chain({chain.question_key: question})
```
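A possible shape for the change (a sketch for discussion only; the `retriever_kwargs` parameter name is an assumption, not an existing API):

```python
def query(
    self, question: str, llm: Optional[BaseLanguageModel] = None,
    retriever_kwargs: Optional[dict] = None, **kwargs: Any
) -> str:
    """Query the vectorstore, forwarding retriever options to as_retriever()."""
    llm = llm or OpenAI(temperature=0)
    retriever_kwargs = retriever_kwargs or {}
    chain = RetrievalQA.from_chain_type(
        llm, retriever=self.vectorstore.as_retriever(**retriever_kwargs), **kwargs
    )
    return chain.run(question)
```

Usage would then look like `index.query(question, retriever_kwargs={"search_type": "mmr", "search_kwargs": {"k": 8}})`.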
### Motivation
I can't input `VectorStoreRetriever` arguments such as `search_type` or `search_kwargs` into `VectorStoreIndexWrapper.query()`, but I would be able to do that via `VectorStore.as_retriever()`, which `query()` and `query_with_sources()` use anyway.
### Your contribution
If someone isn't already working on this, I can make this change and submit a PR. | Passing VectorStoreRetriever arguments to VectorStoreIndexWrapper.query() | https://api.github.com/repos/langchain-ai/langchain/issues/7376/comments | 1 | 2023-07-07T20:39:29Z | 2023-10-14T20:09:57Z | https://github.com/langchain-ai/langchain/issues/7376 | 1,794,271,625 | 7,376 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Below is the code which i'm using for another model
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import HuggingFaceEmbeddings
import llama_cpp

# create a chatbot chain. Memory is managed externally.
qa = ConversationalRetrievalChain.from_llm(
    llm=load_model(model_id="TheBloke/orca_mini_v2_7B-GGML", device_type="mps", model_basename="orca-mini-v2_7b.ggmlv3.q4_0.bin"),  # ChatOpenAI(model_name=llm_name, temperature=0),
    chain_type=chain_type,
    retriever=retriever,
    return_source_documents=True,
    return_generated_question=True,
)
```
I'm loading the Orca Mini model by passing model_id and model_basename. How do I load the Vicuna-7B, Vicuna-13B, and Falcon LLMs the same way? And how do I change the device type from mps to cuda?
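A possible direction (a sketch, assuming the `load_model` helper from run_localGPT accepts the same keywords shown above; the repo id and GGML filename below are illustrative assumptions — use the exact filenames published with whichever model you download):

```python
# Vicuna-13B (GGML build) on a CUDA GPU instead of Apple "mps".
llm = load_model(
    model_id="TheBloke/Wizard-Vicuna-13B-Uncensored-GGML",    # or a Vicuna-7B / Falcon repo
    device_type="cuda",                                       # was "mps"
    model_basename="Wizard-Vicuna-13B-Uncensored.ggmlv3.q4_0.bin",  # assumption
)
```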
### Idea or request for content:
_No response_ | How to load Vicuna-7b, Vicuna13-b and Falcon LLM's from langchain through ConversationalRetrievalChain function? | https://api.github.com/repos/langchain-ai/langchain/issues/7374/comments | 0 | 2023-07-07T20:17:44Z | 2023-07-07T20:26:58Z | https://github.com/langchain-ai/langchain/issues/7374 | 1,794,215,961 | 7,374 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
@hwchase17 There's a reporter from Fortune who is, in their words, "launching our A.I. 50 list later this month. LangChain is being considered for the list, and while I've tried to get in touch with Harrison and Ankush in as many ways as possible, I haven't been able to."
They've been trying to get in touch with @hwchase17 for weeks now.
Search for **bdanweiss** on the Discord group.
### Suggestion:
_No response_ | Fortune Reporter trying to get in touch with Chase | https://api.github.com/repos/langchain-ai/langchain/issues/7373/comments | 1 | 2023-07-07T20:07:36Z | 2023-07-10T16:26:16Z | https://github.com/langchain-ai/langchain/issues/7373 | 1,794,191,212 | 7,373 |
[
"hwchase17",
"langchain"
]
| ### System Info
from typing_extensions import Protocol
from langchain.llms import OpenAI
llm = OpenAI(model_name='text-davinci-003', temperature=0.7, max_tokens=512)
print(llm)
-----------------------------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-28-c0d04f63c5e1> in <module>
1 get_ipython().system('pip install typing-extensions==4.3.0')
2 from typing_extensions import Protocol
----> 3 from langchain.llms import OpenAI
4 llm = OpenAI(model_name='text-davinci-003', temperature=0.7, max_tokens=512)
5 print(llm)
~\anaconda3\envs\GPTLLM\lib\site-packages\langchain\__init__.py in <module>
1 """Main entrypoint into package."""
2
----> 3 from importlib import metadata
4 from typing import Optional
5
~\anaconda3\envs\GPTLLM\lib\importlib\metadata\__init__.py in <module>
15 import collections
16
---> 17 from . import _adapters, _meta
18 from ._collections import FreezableDefaultDict, Pair
19 from ._functools import method_cache, pass_none
~\anaconda3\envs\GPTLLM\lib\importlib\metadata\_meta.py in <module>
----> 1 from typing import Any, Dict, Iterator, List, Protocol, TypeVar, Union
2
3
4 _T = TypeVar("_T")
5
ImportError: cannot import name 'Protocol'
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.llms import OpenAI
llm = OpenAI(model_name='text-davinci-003', temperature=0.7, max_tokens=512)
print(llm)
### Expected behavior
OpenAI
Params:{model_name, temperature, max_tokens} | from langchain.llms import OpenAI causing ImportError: cannot import name 'Protocol' | https://api.github.com/repos/langchain-ai/langchain/issues/7369/comments | 3 | 2023-07-07T18:30:00Z | 2024-06-08T12:37:41Z | https://github.com/langchain-ai/langchain/issues/7369 | 1,794,024,937 | 7,369 |
[
"hwchase17",
"langchain"
]
I am using a conversational retrieval chain with memory, but I am getting incorrect answers for trivial questions.
Any suggestions on what I can do to improve the accuracy of the output?
```python
# memory = ConversationEntityMemory(llm=llm, return_messages=True)
memory = ConversationBufferMemory(memory_key="chat_history", output_key='answer')

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
    get_chat_history=lambda h: h,
    return_source_documents=True)
```
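A few common levers to try (a sketch; the values are assumptions, not a guaranteed fix, and `vectordb` stands in for whatever vector store backs your retriever): retrieve more context, make the LLM deterministic, and inspect what was actually retrieved for the failing questions.

```python
retriever = vectordb.as_retriever(search_kwargs={"k": 6})    # pull in more supporting chunks

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,                       # e.g. temperature=0 for more deterministic answers
    retriever=retriever,
    memory=memory,
    return_source_documents=True,  # check whether the right passages are being retrieved
)
```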

| conversationalRetrievalChain - how to improve accuracy | https://api.github.com/repos/langchain-ai/langchain/issues/7368/comments | 3 | 2023-07-07T18:26:31Z | 2023-10-16T16:06:15Z | https://github.com/langchain-ai/langchain/issues/7368 | 1,794,019,802 | 7,368 |
[
"hwchase17",
"langchain"
]
| ### Feature request
When using the YouTube loader, I think it would be useful to take the chapters into account if they are present.
1. The chapter timecodes could be used to decide where to chunk. Any chunk inside a chapter's timeframe could also carry the same "youtube_chapter_title" metadata.
2. The name of the chapter could be added directly inside the transcript, for example as a markdown header, which could help an LLM maintain context over time (see the sketch below).
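Illustrative only — this is not an existing loader option, just what a chunk carrying chapter information might look like (the metadata keys and values are assumptions):

```python
from langchain.schema import Document

Document(
    page_content="## Introduction\nWelcome to the video, today we will cover ...",
    metadata={
        "source": "dQw4w9WgXcQ",               # hypothetical video id
        "youtube_chapter_title": "Introduction",
        "chapter_start_seconds": 0,
    },
)
```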
### Motivation
There is useful information in the YouTube chapter titles and timecodes that could be of use to LLMs.
Summarizing transcripts would probably produce higher-quality results if headers are present rather than one huge wall of text.
Adding metadata is always a win.
### Your contribution
Unfortunately not able to help for the time being but wanted to get the idea out there. | use youtube chapter as hints and metadata in the youtube loader | https://api.github.com/repos/langchain-ai/langchain/issues/7366/comments | 13 | 2023-07-07T18:19:36Z | 2024-06-15T18:33:35Z | https://github.com/langchain-ai/langchain/issues/7366 | 1,794,008,740 | 7,366 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.10
Langchain 0.0.226
Windows 11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os, langchain

os.environ['SERPAPI_API_KEY'] = ""
os.environ['OPENAI_API_KEY'] = ""

from langchain.chains import LLMChain
from langchain.agents import ConversationalChatAgent, SelfAskWithSearchChain, AgentExecutor
from langchain.memory import ConversationBufferWindowMemory
from langchain.tools import Tool
from langchain.llms import OpenAI

conversation_buffer_window_memory: ConversationBufferWindowMemory = ConversationBufferWindowMemory(
    input_key="input", memory_key="chat_history")

search = langchain.SerpAPIChain()
self_ask_and_search = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search, verbose=True)

tools = [
    Tool(
        name="Search",
        func=self_ask_and_search.run,
        description="useful for when you need to answer questions about current events",
    )
]

prompt = ConversationalChatAgent.create_prompt(
    tools, input_variables=["input", "chat_history", "agent_scratchpad"]
)
llm_chain = LLMChain(
    llm=OpenAI(temperature=0), prompt=prompt)
agent = ConversationalChatAgent(
    llm_chain=llm_chain, tools=tools, verbose=True)
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, memory=conversation_buffer_window_memory)

agent_executor.run('what is the capital of texas?')
```
returns the following error:
(<class 'ValueError'>, ValueError('variable chat_history should be a list of base messages, got '), <traceback object at 0x0000027BA6366E40>)
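A likely workaround (a sketch based on the error message): `ConversationalChatAgent`'s prompt stores `chat_history` in a `MessagesPlaceholder`, so the memory has to hand back a list of message objects rather than a formatted string, which is what `return_messages=True` does.

```python
conversation_buffer_window_memory = ConversationBufferWindowMemory(
    input_key="input",
    memory_key="chat_history",
    return_messages=True,   # hand history back as message objects, not one string
)
```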
### Expected behavior
Return the LLM result while updating the memory mechanism. | ConversationBufferWindowMemory returns empty string on empty history instead of empty array. | https://api.github.com/repos/langchain-ai/langchain/issues/7365/comments | 7 | 2023-07-07T17:56:12Z | 2023-12-05T12:07:46Z | https://github.com/langchain-ai/langchain/issues/7365 | 1,793,960,528 | 7,365 |
[
"hwchase17",
"langchain"
]
| ### System Info
The provided class `HuggingFaceEndpoint(LLM)` in
[langchain/langchain/llms/huggingface_endpoint.py](https://github.com/hwchase17/langchain/blob/370becdfc2dea35eab6b56244872001116d24f0b/langchain/llms/huggingface_endpoint.py)
has a bug.
It should be:
```python
if self.task == "text-generation":
    # Text generation return includes the starter text.
    text = generated_text[0]["generated_text"]
```
not:
```python
text = generated_text[0]["generated_text"][len(prompt) :]
```
With the current code, the call will likely just return 0.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
llm = HuggingFaceEndpoint(endpoint_url=os.getenv('ENDPOINT_URL'), task="text-generation",
                          model_kwargs={"temperature": 0.7, "max_length": 512})

conversation_chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectorstore.as_retriever(),
    memory=memory
)
```
### Expected behavior
the output is 0 | HuggingFaceEndpoint Class Bug | https://api.github.com/repos/langchain-ai/langchain/issues/7353/comments | 1 | 2023-07-07T14:49:27Z | 2023-07-11T07:06:07Z | https://github.com/langchain-ai/langchain/issues/7353 | 1,793,680,431 | 7,353 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain Version - 0.0.201
Platform - Windows 11
Python - 3.10.11
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Load some text documents into a vector store (I used Deep Lake).
2. Load the DB.
3. Call the function `summarizer(db, "Summarize the mentions of google according to their AI program")` (defined in the attached file).
4. Run with chain_type "stuff" and it will work; with "map_reduce" it fails in the retrieval QA bot.
[main.zip](https://github.com/hwchase17/langchain/files/11982265/main.zip)
### Expected behavior
it should work for all the chain types and give results | map_reduce and refine not working with RetrievalQA chain | https://api.github.com/repos/langchain-ai/langchain/issues/7347/comments | 9 | 2023-07-07T13:09:59Z | 2023-11-14T16:08:01Z | https://github.com/langchain-ai/langchain/issues/7347 | 1,793,525,647 | 7,347 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
python environment:
neo4j 5.0.0
neobolt 1.7.17
neo4j version: neo4j-35-apoc:v20220808
Error:
File "D:\Anaconda3\envs\langchain\lib\site-packages\neo4j_sync\io_bolt3.py", line 200, in run
raise ConfigurationError(
neo4j.exceptions.ConfigurationError: Database name parameter for selecting database is not supported in Bolt Protocol Version(3, 0). Database name 'neo4
### Suggestion:
_No response_ | Issue: When implementing Cypher Search using neo4j environment | https://api.github.com/repos/langchain-ai/langchain/issues/7346/comments | 5 | 2023-07-07T13:00:11Z | 2023-10-28T16:05:40Z | https://github.com/langchain-ai/langchain/issues/7346 | 1,793,511,650 | 7,346 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I am looking to load an OpenAPI YAML file into a Vertex AI LLM model. LangChain provides this for OpenAI but not for Vertex AI. This is how I currently write the code for OpenAI; I want similar functionality for Vertex AI:
```python
spec = OpenAPISpec.from_file("sample.yaml")
openai_fns, call_api_fn = openapi_spec_to_openai_fn(spec)
```
### Motivation
I am creating a GenAI chatbot for my company where customers can ask questions specific to our product. I need to answer those queries using our internal APIs. To query those APIs, I need to know which API to call and to fill the API parameters according to the user query. For that, I need Vertex AI function-calling support.
Is there already a way in Vertex AI to do this? Kindly help me with this.
### Your contribution
I can help on the issue if anything is required. | Load OPEN API yaml file to vertex ai LLM model | https://api.github.com/repos/langchain-ai/langchain/issues/7345/comments | 1 | 2023-07-07T12:36:51Z | 2023-10-14T20:10:17Z | https://github.com/langchain-ai/langchain/issues/7345 | 1,793,476,844 | 7,345 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I want to ingest a large file (about 5,000,000 characters), and I get this error: openai.error.RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization org-DOqIVFPozlLEOcvlTbpvpcKt on tokens per min. Limit: 150000 / min. Current: 0 / min. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method.
My code:
```python
class Agent:
    def __init__(self, openai_api_key: str | None = None) -> None:
        self.key = openai_api_key
        self.embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key, request_timeout=120, max_retries=10)
        self.text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
        self.llm = ChatOpenAI(temperature=0, openai_api_key=openai_api_key, max_tokens=500, model_name="gpt-3.5-turbo-16k")
        self.chat_history = None
        self.chain = None
        self.db = None

    def ask(self, question: str) -> str:
        response = self.chain({"question": question, "chat_history": self.chat_history})
        response = response["answer"].strip()
        self.chat_history.append((question, response))
        return response

    def ingest(self, file_path) -> None:
        loader = TextLoader(file_path, encoding="utf-8")
        documents = loader.load()
        splitted_documents = self.text_splitter.split_documents(documents)
        if self.db is None:
            self.db = FAISS.from_documents(splitted_documents, self.embeddings)
            self.chain = ConversationalRetrievalChain.from_llm(self.llm, self.db.as_retriever())
            self.chat_history = []
        else:
            self.db.add_documents(splitted_documents)

    def forget(self) -> None:
        self.db = None
        self.chain = None
        self.chat_history = None
```
Is there a solution to this problem?
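One possible mitigation (a sketch; the batch size and pause below are assumptions, and the ada-002 limit is tokens per minute, so the right numbers depend on your chunk sizes): embed the split documents in smaller batches and pause between batches instead of sending everything at once.

```python
import time

def ingest_in_batches(self, splitted_documents, batch_size=200, pause_seconds=10):
    # Variant of ingest() that trickles documents into the index.
    for i in range(0, len(splitted_documents), batch_size):
        part = splitted_documents[i:i + batch_size]
        if self.db is None:
            self.db = FAISS.from_documents(part, self.embeddings)
        else:
            self.db.add_documents(part)
        time.sleep(pause_seconds)  # stay under the tokens-per-minute limit
```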
### Suggestion:
_No response_ | Issue: openai.error.RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization | https://api.github.com/repos/langchain-ai/langchain/issues/7343/comments | 20 | 2023-07-07T11:34:51Z | 2024-03-12T03:27:18Z | https://github.com/langchain-ai/langchain/issues/7343 | 1,793,386,363 | 7,343 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Inferring query filters from natural language is powerful. The SelfQueryRetriever is a great implementation, but it is not yet compatible with Elasticsearch.
### Motivation
Choosing Elasticsearch as a vector store is interesting in terms of hybrid search.
It also makes sense when you have an established infrastructure and technical expertise.
### Your contribution
cc: @jeffvestal, @derickson | Create a built in translator for SelfQueryRetriever for Elasticsearch | https://api.github.com/repos/langchain-ai/langchain/issues/7341/comments | 2 | 2023-07-07T10:53:34Z | 2023-11-21T16:06:47Z | https://github.com/langchain-ai/langchain/issues/7341 | 1,793,313,012 | 7,341 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain v0.0.225, Ubuntu 22.04.2 LTS, Python 3.10
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm executing the code from the official notebook [gpt4all.ipynb](https://github.com/hwchase17/langchain/blob/master/docs/extras/modules/model_io/models/llms/integrations/gpt4all.ipynb) on my local machine; my IDE is VSCode.
I'm getting this error: **AttributeError: 'Model' object has no attribute '_ctx'**

### Expected behavior

| langchain + gpt4all | https://api.github.com/repos/langchain-ai/langchain/issues/7340/comments | 6 | 2023-07-07T10:43:13Z | 2023-11-28T16:09:46Z | https://github.com/langchain-ai/langchain/issues/7340 | 1,793,294,393 | 7,340 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello everyone,
I've tried numerous approaches, but every time I attempt to transcribe a YouTube video, the Whisper task gets terminated.
Does anyone have any suggestions?
```python
def youtube_transcript(query: str) -> str:
    # Use the URL of the YouTube video you want to download
    youtube_url = [get_input_writing(query)]
    # -------------
    # Directory to save audio files
    save_dir = "/home/ds_logos_2/transcripts"

    # Transcribe the videos to text
    loader = GenericLoader(YoutubeAudioLoader(youtube_url, save_dir), OpenAIWhisperParser())
    docs = loader.load()

    # Combine doc
    combined_docs = [doc.page_content for doc in docs]
    text = " ".join(combined_docs)

    # Save the transcription to a text file
    with open('/home/davesoma/transcripts/transcript.txt', 'w') as f:
        f.write(text)
```
```
[youtube] -hxeDjAxvJ8: Downloading webpage
[youtube] -hxeDjAxvJ8: Downloading ios player API JSON
[youtube] -hxeDjAxvJ8: Downloading android player API JSON
[youtube] -hxeDjAxvJ8: Downloading m3u8 information
[info] -hxeDjAxvJ8: Downloading 1 format(s): 140
[download] Destination: /home/davesoma/ds_logos_2/transcripts/Marc Andreessen: Future of the Internet, Technology, and AI | Lex Fridman Podcast #386.m4a
[download] 100% of 177.41MiB in 00:00:18 at 9.52MiB/s
[FixupM4a] Correcting container of "/home/davesoma/ds_logos_2/transcripts/Marc Andreessen: Future of the Internet, Technology, and AI | Lex Fridman Podcast #386.m4a"
[ExtractAudio] Not converting audio /home/davesoma/ds_logos_2/transcripts/Marc Andreessen: Future of the Internet, Technology, and AI | Lex Fridman Podcast #386.m4a; fil
e is already in target format m4a Transcribing part 1!
Killed
``` | Issue: Whisper terminates YouTube transcriptions. | https://api.github.com/repos/langchain-ai/langchain/issues/7339/comments | 3 | 2023-07-07T10:39:03Z | 2023-10-14T20:10:22Z | https://github.com/langchain-ai/langchain/issues/7339 | 1,793,287,416 | 7,339 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I have been trying to delete custom Redis data using LangChain, but the search feature only returns the document and metadata with a similarity score. Hence I modified the code to also return the document ID along with the document data; the ID is helpful for manual deletion of specific elements.
### Motivation
Retrieving the document ID from the Redis semantic search will help me systematically replace incorrect data with correct data after filtering it using LLMs.
### Your contribution
Yes I have fixed the redis part to retrieve the document ID from semantic search. I will fix the issue if the pull request is allowed. | Add feature to get document ID from redis after redis search document retrieval. | https://api.github.com/repos/langchain-ai/langchain/issues/7338/comments | 2 | 2023-07-07T10:29:51Z | 2023-10-06T16:04:59Z | https://github.com/langchain-ai/langchain/issues/7338 | 1,793,271,619 | 7,338 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am doing PDF question answering using the code below.
Note: I have integrated ConversationBufferMemory to keep my chat history in memory.
```python
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import Chroma
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import Cohere
import os

os.environ["COHERE_API_KEY"] = "cohere key"

model = Cohere(model="command-xlarge")
embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2')

pdf_path = "file.pdf"
loader = PyPDFLoader(pdf_path)
pages = loader.load_and_split()

vectordb = Chroma.from_documents(pages, embedding=embeddings,
                                 persist_directory=".")
vectordb.persist()

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
pdf_qa = ConversationalRetrievalChain.from_llm(model, vectordb.as_retriever(), memory=memory)

while True:
    query = input("Enter the question\n")
    result = pdf_qa({"question": query})
    print("Answer:")
    print(result["answer"])
```
What is actually happening here?
I am observing that the memory is not being kept up to date: if I ask a question that refers to the previous context, the chain is unable to answer it.
Is my method correct?
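A quick way to check whether the memory is actually being updated between turns (a sketch; `load_memory_variables` simply reflects what the memory currently holds):

```python
# After each call to pdf_qa, inspect the stored history.
print(memory.load_memory_variables({})["chat_history"])
```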
### Suggestion:
_No response_ | Working of ConversationBufferMemory in the context of document based question answering | https://api.github.com/repos/langchain-ai/langchain/issues/7337/comments | 1 | 2023-07-07T10:12:08Z | 2023-10-14T20:10:27Z | https://github.com/langchain-ai/langchain/issues/7337 | 1,793,238,048 | 7,337 |
[
"hwchase17",
"langchain"
]
| I want to pass "chat_history" to the agent, like this:
```python
...
agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)
response = agent.run({"input": {"chat_history": _message_to_tuple(history), "question": query}})
```
but I got an error. How can I pass 'chat_history' to the agent?
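One pattern commonly shown for the OpenAI functions agent (a sketch; it assumes you want the agent itself to manage the history rather than passing it inside `input`): attach a memory and expose it to the prompt through `extra_prompt_messages`.

```python
from langchain.memory import ConversationBufferMemory
from langchain.prompts import MessagesPlaceholder

memory = ConversationBufferMemory(memory_key="memory", return_messages=True)
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
    agent_kwargs={"extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")]},
    memory=memory,
)
response = agent.run(input=query)  # history is injected from memory on every turn
```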
### Suggestion:
_No response_ | Issue: question about agents | https://api.github.com/repos/langchain-ai/langchain/issues/7336/comments | 6 | 2023-07-07T09:49:22Z | 2023-10-19T16:06:33Z | https://github.com/langchain-ai/langchain/issues/7336 | 1,793,196,340 | 7,336 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hello,
I am trying to use AzureChatOpenAI to develop a QA system with memory. For that purpose, I have the following code:
```python
import faiss
import pickle
from langchain.chat_models import AzureChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.schema import HumanMessage
memory = ConversationBufferMemory(
memory_key="chat_history",
return_messages=True
)
# Load the LangChain.
index = faiss.read_index("docs.index")
with open("faiss_store.pkl", "rb") as f:
store = pickle.load(f)
AZURE_BASE_URL = "{MY_BASE_URL}.openai.azure.com/"
AZURE_OPENAI_API_KEY = "MY_API_KEY"
DEPLOYMENT_NAME = "chat"
llm = AzureChatOpenAI(
openai_api_base=AZURE_BASE_URL,
openai_api_version="2023-03-15-preview",
deployment_name="chat",
openai_api_key=AZURE_OPENAI_API_KEY,
openai_api_type="azure",
temperature=0.01
)
retriever = store.as_retriever()
qa = ConversationalRetrievalChain.from_llm(
llm,
retriever=retriever,
memory=memory
)
user_input = "Question?"
result = qa({"question": user_input})
print(result)
```
This code is raising the following error:
```
openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
```
I have tried using AzureChatOpenAI in other ways and it is working without any problem, using the same deployed model in Azure:
```python
AZURE_BASE_URL = "{MY_BASE_URL}.openai.azure.com/"
AZURE_OPENAI_API_KEY = "MY_API_KEY"
DEPLOYMENT_NAME = "chat"
llm = AzureChatOpenAI(
openai_api_base=AZURE_BASE_URL,
openai_api_version="2023-03-15-preview",
deployment_name="chat",
openai_api_key=AZURE_OPENAI_API_KEY,
openai_api_type="azure",
temperature=0.01
)
chain = load_qa_chain(llm=llm, chain_type="map_reduce")
```
Therefore, the problem is not about the deployment I made in Azure, it seems to work fine in other situations.
Am I missing something when using ConversationalRetrievalChain with AzureChatOpenAI? I have tried so many things and nothing seems to work.
Thanks in advance for any help.
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Declare the LLM using AzureChatOpenAI:
```python
AZURE_BASE_URL = "{MY_BASE_URL}.openai.azure.com/"
AZURE_OPENAI_API_KEY = "MY_API_KEY"
DEPLOYMENT_NAME = "chat"
llm = AzureChatOpenAI(
openai_api_base=AZURE_BASE_URL,
openai_api_version="2023-03-15-preview",
deployment_name="chat",
openai_api_key=AZURE_OPENAI_API_KEY,
openai_api_type="azure",
temperature=0.01
)
```
2. Declare the ConversationBufferMemory:
```python
memory = ConversationBufferMemory(
memory_key="chat_history",
return_messages=True
)
```
3. Load FAISS index:
```python
# Load the LangChain.
index = faiss.read_index("docs.index")
with open("faiss_store.pkl", "rb") as f:
store = pickle.load(f)
```
4. Declare and run the ConversationalRetrievalChain:
```python
qa = ConversationalRetrievalChain.from_llm(
llm,
retriever=retriever,
memory=memory
)
user_input = "Question?"
result = qa({"question": user_input})
print(result)
```
### Expected behavior
Error raised:
```
openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
``` | AzureChatOpenAI raises The API deployment for this resource does not exist when used with ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/7331/comments | 3 | 2023-07-07T08:41:48Z | 2024-02-20T21:07:36Z | https://github.com/langchain-ai/langchain/issues/7331 | 1,793,084,323 | 7,331 |
[
"hwchase17",
"langchain"
]
| ### System Info
I intend to use ConversationSummaryBufferMemory with ChatOpenAI in a conversation chain. For the chat, there is a need to set a system message to instruct the chat assistant and give it an appropriate personality. However, inserting a system message into the memory is not supported, either via `save_context` (the documented way) or via `memory.chat_memory.messages.insert()`.
The memory also seems to use the system message slot to send the chat summary itself. This makes ConversationSummaryBufferMemory incompatible with ChatOpenAI.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Using either of the two methods to use the system message
1. memory.chat_memory.messages.insert(0, system_message)
2. memory.save_context({"input": SystemMessage(content=system_message)}, {"output": ""})
### Expected behavior
We should be able to use any memory with ChatOpenAI as these need to be modular but ConversationSummaryBufferMemory seems incompatible with it due to system message. | Conversation Summary Buffer Memory does not accept a system message | https://api.github.com/repos/langchain-ai/langchain/issues/7327/comments | 8 | 2023-07-07T07:27:42Z | 2024-07-29T01:54:55Z | https://github.com/langchain-ai/langchain/issues/7327 | 1,792,965,212 | 7,327 |
[
"hwchase17",
"langchain"
]
| I've got a GoogleDriveLoader implemented with an exponential backoff and a sleep function to try to further mitigate rate limits, but I still get rate-limit errors from Google.
Even though I've added a time.sleep(5) statement, I assume it only takes effect before each attempt to load all the documents, not between individual API calls within the load() method.
```python
google_loader = GoogleDriveLoader(
    folder_id="xxxxxxxxx",
    credentials_path="credentials.json",
    token_path="token.json",
    file_types=["document", "sheet", "pdf"],
    file_loader_cls=UnstructuredFileIOLoader,
    recursive=True,
    verbose=True,
)

@retry(
    stop=stop_after_attempt(7), wait=wait_exponential(multiplier=2, min=60, max=300)
)
def load_documents():
    time.sleep(5)  # delay for 5 seconds
    return google_loader.load()

try:
    google_docs = load_documents()
except:
    logging.error("Exceeded retry attempts for Google API rate limit.")
    raise
```
The exception output:
```
ERROR:root:Exceeded retry attempts for Google API rate limit.
IndexError: list index out of range
```
Stacktrace:
```
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/A_xx?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/A_xx?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/A_xx?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/BF%20IS?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/BF%20BS%20?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/Reporting%20IS?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/Reporting%20BS?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/Statistics?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/Val_Summary?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/Val_Workings?alt=json
ERROR:root:Exceeded retry attempts for Google API rate limit.
Traceback (most recent call last):
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/Users/chris/Repositories/xxxx/ingest.py", line 44, in load_documents
return google_loader.load()
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 347, in load
return self._load_documents_from_folder(
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 248, in _load_documents_from_folder
returns.extend(self._load_sheet_from_id(file["id"])) # type: ignore
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 173, in _load_sheet_from_id
header = values[0]
IndexError: list index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/chris/Repositories/xxxx/ingest.py", line 68, in <module>
ingest_docs()
File "/Users/chris/Repositories/xxxx/ingest.py", line 47, in ingest_docs
google_docs = load_documents()
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x142dfab30 state=finished raised IndexError>]
```
### Suggestion:
Due to the recursive function, and the use case for most people being to load a large Drive folder, would it be possible to implement a rate limiter into the loader itself to slow down the individual API calls?
Alternatively, does anyone have any recommendations on how to better implement an exponential backoff? | Issue: Rate limiting on large Google Drive folder | https://api.github.com/repos/langchain-ai/langchain/issues/7325/comments | 3 | 2023-07-07T06:06:08Z | 2023-12-30T16:07:34Z | https://github.com/langchain-ai/langchain/issues/7325 | 1,792,853,218 | 7,325 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I used RecursiveCharacterTextSplitter.from_tiktoken_encoder to split a document. If I set chunk_size to 2000, OpenAI cannot answer my question from the documents; if I set chunk_size to 500, it works very well. As a rule of thumb, what is the best size for a chunk?
### Suggestion:
_No response_ | What is the best size for a chunk | https://api.github.com/repos/langchain-ai/langchain/issues/7322/comments | 2 | 2023-07-07T05:42:08Z | 2023-10-14T20:10:37Z | https://github.com/langchain-ai/langchain/issues/7322 | 1,792,823,901 | 7,322 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: v0.0.225
Python Version: 3.10
Deployed and running on AWS Lambda deployed with x86_64 architecture.
### Who can help?
@jacoblee93
### Information
- [x] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
def callChatModel(input, token):
    print('Executing with input:', input)
    llm = ChatOpenAI(model="gpt-3.5-turbo-0613",
                     temperature=0)
    history = DynamoDBChatMessageHistory(table_name="MemoryPy",
                                         session_id=token)
    memory = ConversationBufferWindowMemory(
        k=20, memory_key='chat_history', chat_memory=history, input_key="input", return_messages=True)
    zapier = ZapierNLAWrapper(zapier_nla_oauth_access_token=token)
    zapier_toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
    tools = load_tools(["serpapi"], llm=llm) + zapier_toolkit.tools
    print(tools)
    agent = initialize_agent(
        tools, llm,
        agent=AgentType.OPENAI_FUNCTIONS,
        memory=memory,
        verbose=True,
        handle_parsing_errors=True,
    )
    resp = agent.run(input=input)
    return resp
```
Input to the chat model is
`Executing with input: Look up a basic fact about the sun, no more than one sentence. Send this fact to <email>@gmail.com `
Execution logs from CloudWatch:
```
> Entering new chain...
[ERROR] InvalidRequestError: 'Gmail: Send Email' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.1.name'
Traceback (most recent call last): File "/var/task/handler.py", line 152, in handle_chat chatResp = callChatModel(message, token) File "/var/task/handler.py", line 92, in callChatModel resp = agent.run(input=input) File "/var/task/langchain/chains/base.py", line 320, in run return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ File "/var/task/langchain/chains/base.py", line 181, in __call__ raise e File "/var/task/langchain/chains/base.py", line 175, in __call__ self._call(inputs, run_manager=run_manager) File "/var/task/langchain/agents/agent.py", line 987, in _call next_step_output = self._take_next_step( File "/var/task/langchain/agents/agent.py", line 792, in _take_next_step output = self.agent.plan( File "/var/task/langchain/agents/openai_functions_agent/base.py", line 210, in plan predicted_message = self.llm.predict_messages( File "/var/task/langchain/chat_models/base.py", line 398, in predict_messages return self(messages, stop=_stop, **kwargs) File "/var/task/langchain/chat_models/base.py", line 348, in __call__ generation = self.generate( File "/var/task/langchain/chat_models/base.py", line 124, in generate raise e File "/var/task/langchain/chat_models/base.py", line 114, in generate self._generate_with_cache( File "/var/task/langchain/chat_models/base.py", line 261, in _generate_with_cache return self._generate( File "/var/task/langchain/chat_models/openai.py", line 371, in _generate response = self.completion_with_retry(messages=message_dicts, **params) File "/var/task/langchain/chat_models/openai.py", line 319, in completion_with_retry return _completion_with_retry(**kwargs) File "/var/task/tenacity/__init__.py", line 289, in wrapped_f return self(f, *args, **kw) File "/var/task/tenacity/__init__.py", line 379, in __call__ do = self.iter(retry_state=retry_state) File "/var/task/tenacity/__init__.py", line 314, in iter return fut.result() File "/var/lang/lib/python3.10/concurrent/futures/_base.py", line 451, in result return self.__get_result() File "/var/lang/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception File "/var/task/tenacity/__init__.py", line 382, in __call__ result = fn(*args, **kwargs) File "/var/task/langchain/chat_models/openai.py", line 317, in _completion_with_retry return self.client.create(**kwargs) File "/var/task/openai/api_resources/chat_completion.py", line 25, in create return super().create(*args, **kwargs) File "/var/task/openai/api_resources/abstract/engine_api_resource.py", line 153, in create response, _, api_key = requestor.request( File "/var/task/openai/api_requestor.py", line 298, in request resp, got_stream = self._interpret_response(result, stream) File "/var/task/openai/api_requestor.py", line 700, in _interpret_response self._interpret_response_line( File "/var/task/openai/api_requestor.py", line 763, in _interpret_response_line raise self.handle_error_response(
```
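A possible workaround while the incompatibility stands (a sketch, not a verified fix — it assumes the tool objects allow their `name` to be reassigned): OpenAI function names must match `^[a-zA-Z0-9_-]{1,64}$`, but Zapier NLA tool names such as "Gmail: Send Email" contain spaces and colons, so sanitizing the names before building the agent may let the functions agent register them.

```python
import re

for tool in zapier_toolkit.tools:
    # e.g. "Gmail: Send Email" -> "Gmail__Send_Email" (hypothetical renaming scheme)
    tool.name = re.sub(r"[^a-zA-Z0-9_-]", "_", tool.name)[:64]
```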
### Expected behavior
I'd expect the agent chain to just execute | Zapier Toolkit and Function Agents not compatible | https://api.github.com/repos/langchain-ai/langchain/issues/7315/comments | 2 | 2023-07-07T01:38:10Z | 2023-10-14T20:10:42Z | https://github.com/langchain-ai/langchain/issues/7315 | 1,792,553,905 | 7,315 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Python Documentation for Output Parser Unavailable
URL: https://python.langchain.com/docs/modules/prompts/output_parsers.html
### Idea or request for content:
I am currently taking the "Langchain/lesson/2/models, prompts, and parsers" course from deeplearning.ai. While working on the course material, I encountered difficulties with the output parser in Python. To seek assistance and better understand the usage of the output parser, I attempted to access the documentation for the Python implementation. However, I received a "page not found" error when trying to access the Python documentation. | DOC: Broken Python Information Link in Langchain Documentation | https://api.github.com/repos/langchain-ai/langchain/issues/7311/comments | 2 | 2023-07-07T00:02:09Z | 2023-07-07T16:31:22Z | https://github.com/langchain-ai/langchain/issues/7311 | 1,792,485,542 | 7,311 |
[
"hwchase17",
"langchain"
]
| ### System Info
I got an error when trying to load a custom LLM for LlamaIndex and use it with a LangChain agent.
```
# setup prompts - specific to StableLM
from llama_index.prompts.prompts import SimpleInputPrompt
system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""
# This will wrap the default prompts that are internal to llama-index
query_wrapper_prompt = SimpleInputPrompt("<|USER|>{query_str}<|ASSISTANT|>")
import torch
llm = HuggingFaceLLM(
context_window=4096,
max_new_tokens=256,
generate_kwargs={"temperature": 0.7, "do_sample": False, "return_dict_in_generate":True},
system_prompt=system_prompt,
query_wrapper_prompt=query_wrapper_prompt,
tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
model_name="StabilityAI/stablelm-tuned-alpha-3b",
device_map="auto",
stopping_ids=[50278, 50279, 50277, 1, 0],
tokenizer_kwargs={"max_length": 4096},
)
# load in HF embedding model from langchain
embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
service_context = ServiceContext.from_defaults(chunk_size=1024, llm=llm,embed_model=embed_model)
documents = SimpleDirectoryReader('data\\abnamro').load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context,show_progress=True)
from langchain.agents import Tool
tools = [
Tool(
name="LlamaIndex",
func=lambda q: str(index.as_query_engine(
retriever_mode="embedding",
verbose=True,
service_context=service_context
).query(q)),
description="useful for when you want to answer questions about finance. The input to this tool should be a complete english sentence.",
return_direct=True,
),
]
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.agents import initialize_agent
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent_executor = initialize_agent(
tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,memory=memory
)
agent_executor.run(input="What is inflation in the Czech Republic?")
```
got
```
ValidationError Traceback (most recent call last)
Cell In[13], line 1
----> 1 agent_executor = initialize_agent(
2 tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,memory=memory
3 )
4 agent_executor.run(input="What is inflation in the Czech Republic?")
File F:\anaconda\lib\site-packages\langchain\agents\initialize.py:57, in initialize_agent(tools, llm, agent, callback_manager, agent_path, agent_kwargs, tags, **kwargs)
55 agent_cls = AGENT_TO_CLASS[agent]
56 agent_kwargs = agent_kwargs or {}
---> 57 agent_obj = agent_cls.from_llm_and_tools(
58 llm, tools, callback_manager=callback_manager, **agent_kwargs
59 )
60 elif agent_path is not None:
61 agent_obj = load_agent(
62 agent_path, llm=llm, tools=tools, callback_manager=callback_manager
63 )
File F:\anaconda\lib\site-packages\langchain\agents\conversational\base.py:115, in ConversationalAgent.from_llm_and_tools(cls, llm, tools, callback_manager, output_parser, prefix, suffix, format_instructions, ai_prefix, human_prefix, input_variables, **kwargs)
105 cls._validate_tools(tools)
106 prompt = cls.create_prompt(
107 tools,
108 ai_prefix=ai_prefix,
(...)
113 input_variables=input_variables,
114 )
--> 115 llm_chain = LLMChain(
116 llm=llm,
117 prompt=prompt,
118 callback_manager=callback_manager,
119 )
120 tool_names = [tool.name for tool in tools]
121 _output_parser = output_parser or cls._get_default_output_parser(
122 ai_prefix=ai_prefix
123 )
File F:\anaconda\lib\site-packages\langchain\load\serializable.py:74, in Serializable.__init__(self, **kwargs)
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
File F:\anaconda\lib\site-packages\pydantic\main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for LLMChain
llm
value is not a valid dict (type=type_error.dict)
```
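The validation error appears to come from `initialize_agent`, which expects a LangChain language model, while `llm` above is a llama-index `HuggingFaceLLM`. A rough sketch of a variant that keeps the llama-index query engine as the tool but gives the agent a LangChain-native model (model id and generation kwargs below are assumptions, not a verified fix):
```python
from langchain.agents import AgentType, initialize_agent
from langchain.llms import HuggingFacePipeline

# Sketch: use a LangChain-native LLM for the agent itself; the LlamaIndex tool above
# can keep using the llama-index HuggingFaceLLM internally.
agent_llm = HuggingFacePipeline.from_model_id(
    model_id="StabilityAI/stablelm-tuned-alpha-3b",  # assumed model id; any HF causal LM works
    task="text-generation",
    model_kwargs={"temperature": 0.7, "max_length": 512},
)

agent_executor = initialize_agent(
    tools, agent_llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION, memory=memory
)
agent_executor.run(input="What is inflation in the Czech Republic?")
```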
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Connect LlamaIndex with Langchain
### Expected behavior
Load custom LLM | Llama_index model as a tool for lang chain | https://api.github.com/repos/langchain-ai/langchain/issues/7309/comments | 5 | 2023-07-06T22:34:36Z | 2023-07-09T20:42:18Z | https://github.com/langchain-ai/langchain/issues/7309 | 1,792,391,896 | 7,309 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be nice to add a maximal_marginal_relevance to the MongoDBAtlasVectorSearch vectorstore
### Motivation
This will help users get more diverse results than those based only on the relevance score; a rough sketch of the proposed method is below.
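For reference, a minimal sketch of the method, mirroring the `max_marginal_relevance_search` signature other LangChain vectorstores expose (the `_similarity_search_with_embeddings` helper and the `self._embedding` attribute are assumptions about the eventual implementation):
```python
from typing import List, Optional

import numpy as np
from langchain.docstore.document import Document
from langchain.vectorstores.utils import maximal_marginal_relevance


def max_marginal_relevance_search(
    self,
    query: str,
    k: int = 4,
    fetch_k: int = 20,
    lambda_mult: float = 0.5,
    pre_filter: Optional[dict] = None,
) -> List[Document]:
    """Sketch: fetch fetch_k candidates from Atlas, then re-rank locally with MMR."""
    query_embedding = self._embedding.embed_query(query)
    docs_and_embeddings = self._similarity_search_with_embeddings(  # hypothetical helper
        query_embedding, k=fetch_k, pre_filter=pre_filter
    )
    selected = maximal_marginal_relevance(
        np.array(query_embedding),
        [emb for _, emb in docs_and_embeddings],
        k=k,
        lambda_mult=lambda_mult,
    )
    return [docs_and_embeddings[i][0] for i in selected]
```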
### Your contribution
I'll write a PR | MongoDBAtlasVectorSearch vectorstore - add maximal_marginal_relevance method | https://api.github.com/repos/langchain-ai/langchain/issues/7304/comments | 2 | 2023-07-06T21:24:25Z | 2023-10-12T16:05:25Z | https://github.com/langchain-ai/langchain/issues/7304 | 1,792,265,347 | 7,304 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.225
Python version: 3.8.5
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When ground truth and model prediction are both empty strings, the evaluation model returns 'INCORRECT'. I expect the evaluation to return 'CORRECT'.
I ran the below piece of code.
```
llm_bit = AzureChatOpenAI(deployment_name='gpt-4-32k', temperature=0)
test_gt = [{'question': 'What is the name of the company?', 'gt': 'Company A'}, {'question': 'What is the name of the building', 'gt': ''}]
test_output = [{'question': 'What is the name of the company?', 'prediction': 'Company A'}, {'question': 'What is the name of the building', 'prediction': ''}]
eval_chain = QAEvalChain.from_llm(llm_bit)
temp = eval_chain.evaluate(
test_gt, test_output, question_key="question", answer_key="gt", prediction_key="prediction"
)
temp
```
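For what it's worth, a small workaround sketch (not a fix for the chain itself) that short-circuits pairs where both the ground truth and the prediction are empty before calling the eval chain:
```python
graded = []
gt_to_eval, pred_to_eval = [], []
for gt, pred in zip(test_gt, test_output):
    if not gt["gt"].strip() and not pred["prediction"].strip():
        graded.append({"text": "CORRECT"})  # treat empty vs. empty as correct
    else:
        graded.append(None)  # placeholder, filled in from the chain results below
        gt_to_eval.append(gt)
        pred_to_eval.append(pred)

results = iter(eval_chain.evaluate(
    gt_to_eval, pred_to_eval,
    question_key="question", answer_key="gt", prediction_key="prediction",
))
graded = [g if g is not None else next(results) for g in graded]
```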
### Expected behavior
Received output: [{'text': 'CORRECT'}, {'text': 'INCORRECT'}]
Expected output: [{'text': 'CORRECT'}, {'text': 'CORRECT'}] | Evaluation returns 'INCORRECT' when ground truth is empty and prediction is empty | https://api.github.com/repos/langchain-ai/langchain/issues/7303/comments | 1 | 2023-07-06T21:20:21Z | 2023-10-12T16:05:30Z | https://github.com/langchain-ai/langchain/issues/7303 | 1,792,259,575 | 7,303 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I have this code that I want to move to the gpt-3.5-turbo model, since it is 10 times cheaper than text-davinci-003, but I get this error:
```
ValueError: OpenAIChat currently only supports single prompt, got ["Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\nW H I T E PA P E R\nTHE FUTURE OF ART\nRelease\nV 3 . 1\nTable of Contents\n01 Abstract 02 Riwa Tech\nPg 3\nPg
4\n03 Market research 04 Technology\nPg 5\nPg 7\n05 Why we do this 06 How Riwa Tech works\nPg 12\nPg 14\n07 The future 08 Team 09 Coin distribution 10 Business model 11 Timeline\nPg 20\nPg 21\nPg 22\nPg 24\nPg 26\n2\nAbstract\nArt and antiques have always been an integral part of the global economy, and\nthis remains true
today. With the rise of digital platforms and technologies,\ntransaction methods have been revolutionized, but challenges such as\nprovenance, authentication, protection and preservation of cultural heritage\npersist. This white paper proposes integrating blockchain technology to improve\nthe industry's landscape and protect its unique value. Blockchain can provide\nsecure, transparent, and tamper-proof records for art and antiques, addressing\nnumerous issues. By combining traditional values and innovative technology, we\nQuestion: hi\nRelevant text, if any:", 'Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\nTag. The owners can know the current location anytime they want.\nBlockchain\nAppreciation, trading and collecting of artworks are gradually becoming a part of\npeople’s life pursuits. In the development of the art market industry, collectibles\nlack clear records of transactions and evidence systems that can be veri\x00ed,\nmaking it almost impossible to determine the source information of collectibles.\nCollectibles do not have an “ID” system, resulting in no records for artworks. This\nlack of traceability in the industry can easily lead to counterfeiters taking\nadvantage of the situation, resulting in a proliferation of counterfeit artworks and\naffecting the development of the industry.\nOwners who deposit collectibles to Riwa’s ecosystem will get NFTs backed by the\ncollectible. The NFT smart contract will inherit the basic and anti-counterfeit\ndetails. For every future transaction of
the collectible, the smart contract will\nQuestion: hi\nRelevant text, if any:', "Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\nRiwa's advanced 3D technology to create 3D NFT collections and establish their\nown virtual spaces for exhibitions and other purposes.\n20\nTeam\nVIONA ZHANG Founder & chairman Experienced in commercializing artists and artworks, with a successful record in corporate and brand management.\nPIERRE BRUNETOT CEO Ex-CEO and founder of Sante Nature with extensive experience in marketing and strategy.\nYINJI DAI Co-Founder
& Sales Manager Manager of the Asia Region and Co-founder. Over 17 years of experience in art and antiques industry management.\nAASHIR IFTIKHAR Chief Technical Of\x00cer at HashPotato Over 3 years of experience in mobile application development. Expert in Full Stack development.\nEDOUARD BRUNETOT COO CEO of Cobound helps
businesses grow through marketing, sales, and customer service tools.\nFABIEN CERCLET Sales manager Over 7 years in blockchain tech & economics, established strong marketing foundation.\n21\nCoin distribution\nInitial Coin Offering (ICO)\nQuestion: hi\nRelevant text, if any:", "Use the following portion of a long document
to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\nand transaction reliability of artworks within the market.\n1. Riwa dual anti-counterfeiting\n1.1 Electronic Tag (E-Tag) management system\nRiwa's E-Tag technology management system is ef\x00cient, accurate and reliable.\nThe system can automatically read real-time artwork\ninformation and\ndynamically track and detect artwork locations through an electronic map,\nimproving the timeliness and accuracy of issue detection. Each Riwa E-Tag has a\nunique identity code assigned to the artwork or antique it represents, and the\ntags are physically
non-replicable and indestructible. With large storage capacity,\nlong service life, and adaptability to indoor and outdoor environments, Riwa's E-\nTag also allows contactless information collection, pollution resistance, and high\nreliability.\n7\nUsers can access the Riwa system by sensing an item's E-Tag with a smartphone,\nobtaining detailed features, inspection count, origin, ownership change records,\nQuestion: hi\nRelevant text, if any:"]
```
I'm using RetrievalQAWithSourcesChain and FAISS, this is the code
```python
import os
from langchain.document_loaders import UnstructuredURLLoader
from langchain.text_splitter import CharacterTextSplitter
import pickle
import faiss
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chains.question_answering import load_qa_chain
from langchain import OpenAI
os.environ["OPENAI_API_KEY"] = "Apikey"
urls = [
'https://riwa.nftify.network/collection/riwa-nft'
]
loaders = UnstructuredURLLoader(urls=urls)
data = loaders.load()
text_splitter = CharacterTextSplitter(separator="\n", chunk_size=1000, chunk_overlap=200)  # this instantiation was missing from the snippet; values are assumed
docs = text_splitter.split_documents(data)
embeddings = OpenAIEmbeddings()
vectorStore_openAI = FAISS.from_documents(docs, embeddings)
with open("faiss_store_openai.pkl", "wb") as f:
    pickle.dump(vectorStore_openAI, f)
with open("faiss_store_openai.pkl", "rb") as f:
    VectorStore = pickle.load(f)
llm=OpenAI(temperature=0,model_name="gpt-3.5-turbo",max_tokens=32)
chain = RetrievalQAWithSourcesChain.from_llm(llm=llm, retriever=VectorStore.as_retriever())
question=input("What question you want to ask? : ")
print(chain({"question": str(question)}, return_only_outputs=True)["answer"])
```
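For context, a commonly suggested variant (not verified here) is to construct the model with the chat wrapper instead of the completion wrapper, since the error is raised by the legacy `OpenAIChat` path that `OpenAI` falls back to for gpt-3.5-turbo:
```python
from langchain.chat_models import ChatOpenAI

# Assumption: the rest of the pipeline stays the same; only the model wrapper changes.
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo", max_tokens=256)
chain = RetrievalQAWithSourcesChain.from_llm(llm=llm, retriever=VectorStore.as_retriever())
```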
I would really appreciate it if someone could give me some guidance; I've been blocked on this problem for a while.
### Idea or request for content:
_No response_ | DOC: <ValueError: OpenAIChat currently only supports single prompt, got ["Use the following portion of a long document to see if any of the text is relevant to answer the question."]'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/7302/comments | 3 | 2023-07-06T21:08:00Z | 2023-11-24T16:08:09Z | https://github.com/langchain-ai/langchain/issues/7302 | 1,792,242,224 | 7,302 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain=0.0.225, python=3.9.17, openai=0.27.8
openai.api_type = "azure", openai.api_version = "2023-05-15"
api_base, api_key, deployment_name environment variables all configured.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behavior:
Note: This code is pulled directly from document loaders chapter of Langchain Chat With Your Data course with Harrison Chase and Andrew Ng. It downloads an audio file of a public youtube video and generates a transcript.
1. In a Jupyter notebook, configure your Azure OpenAI environment variables and add this code:
```
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import OpenAIWhisperParser
from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
```
2. Create and run a new cell with this inside:
```
url="<https://www.youtube.com/watch?v=jGwO_UgTS7I>"
save_dir="docs/youtube/"
loader = GenericLoader( YoutubeAudioLoader([url],save_dir), OpenAIWhisperParser() )
docs = loader.load()
```
3. At the transcribing step, it will fail with "InvalidRequestError".
Successfully executes the following steps:
```
[youtube] Extracting URL: https://www.youtube.com/watch?v=jGwO_UgTS7I
[youtube] jGwO_UgTS7I: Downloading webpage
[youtube] jGwO_UgTS7I: Downloading ios player API JSON
[youtube] jGwO_UgTS7I: Downloading android player API JSON
[youtube] jGwO_UgTS7I: Downloading m3u8 information
[info] jGwO_UgTS7I: Downloading 1 format(s): 140
[download] docs/youtube//Stanford CS229: Machine Learning Course, Lecture 1 - Andrew Ng (Autumn 2018).m4a has already been downloaded
[download] 100% of 69.76MiB
[ExtractAudio] Not converting audio docs/youtube//Stanford CS229: Machine Learning Course, Lecture 1 - Andrew Ng (Autumn 2018).m4a; file is already in target format m4a
Transcribing part 1!
```
```
InvalidRequestError Traceback (most recent call last)
Cell In[14], line 8
3 save_dir="docs/youtube/"
4 loader = GenericLoader(
5 YoutubeAudioLoader([url],save_dir),
6 OpenAIWhisperParser()
7 )
----> 8 docs = loader.load()
File /usr/local/lib/python3.9/site-packages/langchain/document_loaders/generic.py:90, in GenericLoader.load(self)
88 def load(self) -> List[Document]:
89 """Load all documents."""
---> 90 return list(self.lazy_load())
File /usr/local/lib/python3.9/site-packages/langchain/document_loaders/generic.py:86, in GenericLoader.lazy_load(self)
84 """Load documents lazily. Use this when working at a large scale."""
85 for blob in self.blob_loader.yield_blobs():
---> 86 yield from self.blob_parser.lazy_parse(blob)
File /usr/local/lib/python3.9/site-packages/langchain/document_loaders/parsers/audio.py:51, in OpenAIWhisperParser.lazy_parse(self, blob)
49 # Transcribe
50 print(f"Transcribing part {split_number+1}!")
---> 51 transcript = openai.Audio.transcribe("whisper-1", file_obj)
53 yield Document(
54 page_content=transcript.text,
55 metadata={"source": blob.source, "chunk": split_number},
56 )
File /usr/local/lib/python3.9/site-packages/openai/api_resources/audio.py:65, in Audio.transcribe(cls, model, file, api_key, api_base, api_type, api_version, organization, **params)
55 requestor, files, data = cls._prepare_request(
56 file=file,
57 filename=file.name,
(...)
62 **params,
63 )
64 url = cls._get_url("transcriptions")
---> 65 response, _, api_key = requestor.request("post", url, files=files, params=data)
66 return util.convert_to_openai_object(
67 response, api_key, api_version, organization
68 )
File /usr/local/lib/python3.9/site-packages/openai/api_requestor.py:298, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
277 def request(
278 self,
279 method,
(...)
286 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
287 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
288 result = self.request_raw(
289 method.lower(),
290 url,
(...)
296 request_timeout=request_timeout,
297 )
--> 298 resp, got_stream = self._interpret_response(result, stream)
299 return resp, got_stream, self.api_key
File /usr/local/lib/python3.9/site-packages/openai/api_requestor.py:700, in APIRequestor._interpret_response(self, result, stream)
692 return (
693 self._interpret_response_line(
694 line, result.status_code, result.headers, stream=True
695 )
696 for line in parse_stream(result.iter_lines())
697 ), True
698 else:
699 return (
--> 700 self._interpret_response_line(
701 result.content.decode("utf-8"),
702 result.status_code,
703 result.headers,
704 stream=False,
705 ),
706 False,
707 )
File /usr/local/lib/python3.9/site-packages/openai/api_requestor.py:763, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
761 stream_error = stream and "error" in resp.data
762 if stream_error or not 200 <= rcode < 300:
--> 763 raise self.handle_error_response(
764 rbody, rcode, resp.data, rheaders, stream_error=stream_error
765 )
766 return resp
InvalidRequestError: Resource not found
```
Usually, with "resource not found" errors, the message will tell you to input api_key or deployment_name. I'm not sure what this means, as none of the Loader methods take in these as params.
### Expected behavior
Expected behavior is to finish four parts of transcription and "load" as doc in docs variable. | langchain.document_loaders.generic GenericLoader not working on Azure OpenAI - InvalidRequestError: Resource Not Found, cannot detect declared resource | https://api.github.com/repos/langchain-ai/langchain/issues/7298/comments | 5 | 2023-07-06T19:16:57Z | 2024-02-10T16:22:03Z | https://github.com/langchain-ai/langchain/issues/7298 | 1,792,095,489 | 7,298 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain: 0.0.225
OS: Arch Linux
Python: 3.11
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use any of langchain's openai chat agents with the memory/chat history feature results in the chat history/memory being sent to the openai API in the SYSTEM message, and with incorrect roles specified.
### Expected behavior
While that might be appropriate for certain types of message (Maybe compressed or summarized from older conversations), I expected the chat history memory to be utilizing openai's [messages](https://platform.openai.com/docs/api-reference/chat#chat/create-messages) parameter.
It's much easier to parse (If ever needed) since it's an array of messages. This is related to https://github.com/hwchase17/langchain/issues/7285 which is an even bigger issue that addressing this one could resolve. | OpenAI Chat agents don't make use of OpenAI API `messages` parameter. | https://api.github.com/repos/langchain-ai/langchain/issues/7291/comments | 3 | 2023-07-06T17:42:57Z | 2023-10-14T20:10:48Z | https://github.com/langchain-ai/langchain/issues/7291 | 1,791,972,879 | 7,291 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain: v0.0.225
OS: Ubuntu 22.04
### Who can help?
@agola11
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
### Code
```python
# Added imports (missing from the original snippet) for the callback handler signature.
from typing import Any, Optional, Sequence
from uuid import UUID

import langchain
from chromadb.config import Settings
from langchain.callbacks.streaming_stdout import BaseCallbackHandler
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import LlamaCpp
from langchain.memory import ConversationBufferMemory
from langchain.schema.document import Document
from langchain.vectorstores import Chroma

langchain.debug = True


class DocumentCallbackHandler(BaseCallbackHandler):
    def on_retriever_end(
        self,
        documents: Sequence[Document],
        *,
        run_id: UUID,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> Any:
        print(f"on_retriever_end() CALLED with {len(documents)} documents")


def setup():
    llm = LlamaCpp(
        model_path="models/GPT4All-13B-snoozy.ggml.q5_1.bin",
        n_ctx=4096,
        n_batch=8192,
        callbacks=[],
        verbose=False,
        use_mlock=True,
        n_gpu_layers=60,
        n_threads=8,
    )
    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
    db = Chroma(
        persist_directory="./db",
        embedding_function=embeddings,
        client_settings=Settings(
            chroma_db_impl="duckdb+parquet",
            persist_directory="./db",
            anonymized_telemetry=False,
        ),
    )
    retriever = db.as_retriever(search_kwargs={"k": 4})
    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
    return ConversationalRetrievalChain.from_llm(
        llm=llm, retriever=retriever, memory=memory, callbacks=[DocumentCallbackHandler()]
    )


def main():
    qa = setup()
    while True:
        question = input("\nEnter your question: ")
        answer = qa(question)["answer"]
        print(f"\n> Answer: {answer}")


if __name__ == "__main__":
    main()
```
### Output
```
ggml_init_cublas: found 1 CUDA devices:
Device 0: Quadro RTX 6000
llama.cpp: loading model from models/GPT4All-13B-snoozy.ggml.q5_1.bin
llama_model_load_internal: format = ggjt v2 (pre #1508)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 4096
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 9 (mostly Q5_1)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 0.09 MB
llama_model_load_internal: using CUDA for GPU acceleration
llama_model_load_internal: mem required = 2165.28 MB (+ 1608.00 MB per state)
llama_model_load_internal: allocating batch_size x 1 MB = 512 MB VRAM for the scratch buffer
llama_model_load_internal: offloading 40 repeating layers to GPU
llama_model_load_internal: offloading non-repeating layers to GPU
llama_model_load_internal: offloading v cache to GPU
llama_model_load_internal: offloading k cache to GPU
llama_model_load_internal: offloaded 43/43 layers to GPU
llama_model_load_internal: total VRAM used: 11314 MB
....................................................................................................
llama_init_from_file: kv self size = 3200.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
Enter your question: Should Hamlet end his life?
[chain/start] [1:chain:ConversationalRetrievalChain] Entering Chain run with input:
{
"question": "Should Hamlet end his life?",
"chat_history": []
}
[chain/start] [1:chain:ConversationalRetrievalChain > 3:chain:StuffDocumentsChain] Entering Chain run with input:
[inputs]
[chain/start] [1:chain:ConversationalRetrievalChain > 3:chain:StuffDocumentsChain > 4:chain:LLMChain] Entering Chain run with input:
{
"question": "Should Hamlet end his life?",
"context": "Enter Hamlet.\n\nEnter Hamlet.\n\nEnter Hamlet.\n\nHaply the seas, and countries different,\n With variable objects, shall expel\n This something-settled matter in his heart,\n Whereon his brains still beating puts him thus\n From fashion of himself. What think you on't?\n Pol. It shall do well. But yet do I believe\n The origin and commencement of his grief\n Sprung from neglected love.- How now, Ophelia?\n You need not tell us what Lord Hamlet said.\n We heard it all.- My lord, do as you please;"
}
[llm/start] [1:chain:ConversationalRetrievalChain > 3:chain:StuffDocumentsChain > 4:chain:LLMChain > 5:llm:LlamaCpp] Entering LLM run with input:
{
"prompts": [
"Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\nEnter Hamlet.\n\nEnter Hamlet.\n\nEnter Hamlet.\n\nHaply the seas, and countries different,\n With variable objects, shall expel\n This something-settled matter in his heart,\n Whereon his brains still beating puts him thus\n From fashion of himself. What think you on't?\n Pol. It shall do well. But yet do I believe\n The origin and commencement of his grief\n Sprung from neglected love.- How now, Ophelia?\n You need not tell us what Lord Hamlet said.\n We heard it all.- My lord, do as you please;\n\nQuestion: Should Hamlet end his life?\nHelpful Answer:"
]
}
llama_print_timings: load time = 1100.49 ms
llama_print_timings: sample time = 13.20 ms / 17 runs ( 0.78 ms per token)
llama_print_timings: prompt eval time = 1100.33 ms / 208 tokens ( 5.29 ms per token)
llama_print_timings: eval time = 1097.70 ms / 16 runs ( 68.61 ms per token)
llama_print_timings: total time = 2270.30 ms
[llm/end] [1:chain:ConversationalRetrievalChain > 3:chain:StuffDocumentsChain > 4:chain:LLMChain > 5:llm:LlamaCpp] [2.27s] Exiting LLM run with output:
{
"generations": [
[
{
"text": " I'm sorry, I don't know the answer to that question.",
"generation_info": null
}
]
],
"llm_output": null,
"run": null
}
[chain/end] [1:chain:ConversationalRetrievalChain > 3:chain:StuffDocumentsChain > 4:chain:LLMChain] [2.27s] Exiting Chain run with output:
{
"text": " I'm sorry, I don't know the answer to that question."
}
[chain/end] [1:chain:ConversationalRetrievalChain > 3:chain:StuffDocumentsChain] [2.27s] Exiting Chain run with output:
{
"output_text": " I'm sorry, I don't know the answer to that question."
}
[chain/end] [1:chain:ConversationalRetrievalChain] [5.41s] Exiting Chain run with output:
{
"answer": " I'm sorry, I don't know the answer to that question."
}
> Answer: I'm sorry, I don't know the answer to that question.
```
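One thing that may be relevant here (an assumption, not verified against this exact version): callbacks passed to the chain constructor are not necessarily propagated to child runs such as the retriever, while callbacks passed at call time are. A minimal variant to test:
```python
# Sketch: pass the handler per call instead of (or in addition to) at construction time.
handler = DocumentCallbackHandler()
answer = qa(question, callbacks=[handler])["answer"]
```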
### Expected behavior
I expect the `on_retriever_end()` callback to be called immediately after documents are retrieved. I'm not sure what I'm doing wrong. | on_retriever_end() not called with ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/7290/comments | 12 | 2023-07-06T16:51:42Z | 2024-04-25T16:11:49Z | https://github.com/langchain-ai/langchain/issues/7290 | 1,791,902,494 | 7,290 |
[
"hwchase17",
"langchain"
]
| ### System Info
Mac OS Ventura 13.3.1 (a)
Python 3.10.8
LangChain 0.0.224
### Who can help?
@hwchase17
@hinthornw
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Reproduction Steps:
1. Run the following
```
from langchain.llms import OpenAI
from langchain.indexes import GraphIndexCreator
from langchain.chains import GraphQAChain
from langchain.prompts import PromptTemplate
text = "Apple announced the Vision Pro in 2023."
index_creator = GraphIndexCreator(llm=OpenAI(openai_api_key='{OPEN_AI_KEY_HERE}', temperature=0))
graph = index_creator.from_text(text)
chain = GraphQAChain.from_llm(
OpenAI(temperature=0, openai_api_key='{OPEN_AI_KEY_HERE}'),
graph=graph,
verbose=True
)
chain.run("When did Apple announce the Vision Pro?")
```
2. Observe the "Full Context" output in your terminal and notice that the two triplets are concatenated onto a single line with no spacing in between them.
I believe the issue is in the code [here](https://github.com/hwchase17/langchain/blob/681f2678a357268c435c18f19323ccb50cac079c/langchain/chains/graph_qa/base.py#L80). When only 1 triplet is found in an iteration, `.join` does not add any `\n` characters, resulting in a context string with no separation between triplets.
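A sketch of the kind of change that would address this (loop and variable names below are hypothetical): collect every triplet string first and join once, so single-triplet iterations still land on their own line.
```python
# Sketch of the intended behaviour for building the "Full Context" string.
triplet_strings = []
for entity_triplets in all_triplets:  # hypothetical: triplets gathered per entity
    triplet_strings.extend(entity_triplets)
context = "\n".join(triplet_strings)
```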
### Expected behavior
Expected: A multi-line string with each triplet text on its own line (delimited by `"\n"`)
In the above repro steps, I would expect
```
Full Context:
Apple announced Vision Pro
Vision Pro was announced in 2023
``` | Input in GraphQAChain Prompt is Malformed When Only 1 Triplet Is Found | https://api.github.com/repos/langchain-ai/langchain/issues/7289/comments | 2 | 2023-07-06T16:25:05Z | 2023-07-07T21:19:54Z | https://github.com/langchain-ai/langchain/issues/7289 | 1,791,868,997 | 7,289 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
If we run a `Search` on the python.langchain.com/docs/ we got the result as several clickable fields. Those fields are the URLs of the found results. But the fields are too short to show the URLs. We cannot see what LangChain doc pages were found. We see just the start of the URL string, like `1. python.langchain.com/docs/module...` All found fields shows the same above text, which is useless.

### Idea or request for content:
Several ideas on how to fix it:
1. make the result fields longer and place them one after another in the list.
2. show the last part of the URL string not the start of the URL string. | DOC: Search functionality: `Verified Sources:` fields unreadable | https://api.github.com/repos/langchain-ai/langchain/issues/7288/comments | 1 | 2023-07-06T16:17:48Z | 2023-10-05T22:06:37Z | https://github.com/langchain-ai/langchain/issues/7288 | 1,791,859,217 | 7,288 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.223
Linux
Python 3.11
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
something like this
```python
chat_history = PostgresChatMessageHistory( # Just a slight mod of the postgres class for sorting the results by date
connection_string=config('SUPABASE_POSTGRES_CONNECT_STRING'),
session_id="58964243-23cd-41fe-ad05-ecbfd2a73202", # str(uuid.uuid4()),
table_name="chat_history"
)
memory = ConversationBufferMemory(
memory_key="chat_history",
chat_memory=chat_history,
return_messages=True,
human_prefix="USER", # This doesn't work.
ai_prefix="ASSISTANT",) # This doesn't work.
agent = ChatAgent(
name="Chat Assistant",
tools=_tools,
agent_type=AgentType.OPENAI_MULTI_FUNCTIONS,
llm=openai
)
```
If I look at what was prompted and in postgres, it always shows "Human" and "AI"
### Expected behavior
I expect USER and ASSISTANT to be used everywhere after I set it. I see this as especially important when using openai's chat endpoint since their models were trained using these tokens.
I also think it would be better to load the memory/chat history as the openai API provides parameters for (As a list of messages) instead of in the SYSTEM message, but perhaps that's for another issue. | Can't use human_prefix and ai_prefix with agent | https://api.github.com/repos/langchain-ai/langchain/issues/7285/comments | 3 | 2023-07-06T15:57:25Z | 2024-04-01T11:21:35Z | https://github.com/langchain-ai/langchain/issues/7285 | 1,791,828,808 | 7,285 |
[
"hwchase17",
"langchain"
]
| ### System Info
When I initialise ChatAnthropic(), I get the following error:
anthropic_version = packaging.version.parse(version("anthropic"))
AttributeError: module 'packaging' has no attribute 'version'
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.chat_models import ChatOpenAI, ChatAnthropic
llm = ChatAnthropic()
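A possible local workaround, assuming the failure is simply that the `packaging.version` submodule was never imported before being accessed as an attribute:
```python
import packaging.version  # makes the `version` attribute available on the package

from langchain.chat_models import ChatAnthropic

llm = ChatAnthropic()
```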
### Expected behavior
As shown above. | anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' | https://api.github.com/repos/langchain-ai/langchain/issues/7283/comments | 5 | 2023-07-06T15:35:39Z | 2023-10-14T20:10:57Z | https://github.com/langchain-ai/langchain/issues/7283 | 1,791,794,342 | 7,283 |
[
"hwchase17",
"langchain"
]
| ### System Info
Getting "ValueError: Unable to send PDF to Mathpix" while using MathpixPDFLoader.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
math_pix_loaded = MathpixPDFLoader(file_path)
load_list_mathpix = math_pix_loaded.load()
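In case it helps narrow this down, a sketch that sets Mathpix credentials explicitly before loading (assuming the loader reads `MATHPIX_API_ID` / `MATHPIX_API_KEY` from the environment; the values below are placeholders):
```python
import os

os.environ["MATHPIX_API_ID"] = "your-mathpix-app-id"    # placeholder
os.environ["MATHPIX_API_KEY"] = "your-mathpix-app-key"  # placeholder

math_pix_loaded = MathpixPDFLoader(file_path)
load_list_mathpix = math_pix_loaded.load()
```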
### Expected behavior
A list of pages to be returned. | MathpixPDFLoader doesn't work. | https://api.github.com/repos/langchain-ai/langchain/issues/7282/comments | 5 | 2023-07-06T15:15:17Z | 2023-10-07T17:05:46Z | https://github.com/langchain-ai/langchain/issues/7282 | 1,791,761,739 | 7,282 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be nice to have a similar function as `get_openai_callback()` for VertexAI. This actually gives the input tokens, output tokens and cost of using OpenAI models:
```python
with get_openai_callback() as cb:
    llm = OpenAI(temperature=0)
    chat = ChatOpenAI(temperature=0)
    emb = OpenAIEmbeddings()
    output_llm = llm("As I was saying,")
    print(output_llm)
    # System message + Human Message
    messages = [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming.")
    ]
    output_chat = chat(messages)
    print(output_chat)
    print(cb)
```
I would like to have:
```python
with get_vertexai_callback() as cb:
    llm = VertexAI(temperature=0)
    chat = ChatVertexAI(temperature=0)
    emb = VertexAIEmbeddings()
    print(llm("As I was saying,"))
    # System message + Human Message
    messages = [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming.")
    ]
    print(chat(messages))
```
### Motivation
I would like to monitor my usage of VertexAI models
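Until something official exists, a very rough sketch of a handler-based approach; it only counts prompt/completion characters as a proxy, since I'm not sure the Vertex responses surfaced through LangChain report token usage:
```python
from typing import Any, Dict, List

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import LLMResult


class VertexAIUsageHandler(BaseCallbackHandler):
    """Rough usage tracker: counts prompt/completion characters as a token proxy."""

    def __init__(self) -> None:
        self.prompt_chars = 0
        self.completion_chars = 0
        self.successful_requests = 0

    def on_llm_start(self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) -> None:
        self.prompt_chars += sum(len(p) for p in prompts)

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        self.successful_requests += 1
        for generations in response.generations:
            self.completion_chars += sum(len(g.text) for g in generations)


# Usage sketch: handler = VertexAIUsageHandler(); VertexAI(temperature=0, callbacks=[handler])
```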
### Your contribution
I already read quite a bit the openai version of the callback but if anyone has already thought about how to do it with vertex I would be really curious :). If someone else also planned to do it we could merge efforts! | Callback for VertexAI to monitor cost and token consumption | https://api.github.com/repos/langchain-ai/langchain/issues/7280/comments | 8 | 2023-07-06T14:50:29Z | 2024-06-05T10:48:30Z | https://github.com/langchain-ai/langchain/issues/7280 | 1,791,718,932 | 7,280 |
[
"hwchase17",
"langchain"
]
| You can pass filter to an kNN in elasticsearch.
This is currently implemented in the LangChain wrapper for the exact kNN search,
but it is not yet implemented for the approximate kNN search.
Add filter param to [_default_knn_query](https://github.com/jeffvestal/langchain/blob/10f34bf62e2d53f0b1a7b15ba21c2328b64862cd/langchain/vectorstores/elastic_vector_search.py#L398), [knn_search](https://github.com/jeffvestal/langchain/blob/10f34bf62e2d53f0b1a7b15ba21c2328b64862cd/langchain/vectorstores/elastic_vector_search.py#L432), [knn_hybrid_search](https://github.com/jeffvestal/langchain/blob/10f34bf62e2d53f0b1a7b15ba21c2328b64862cd/langchain/vectorstores/elastic_vector_search.py#L488).
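For illustration, Elasticsearch's approximate kNN accepts a `filter` inside the knn clause, so the change is mostly about threading an optional argument through to the query body (field name below is a placeholder):
```python
# Sketch of what the updated query builder could look like.
def _default_knn_query(query_vector, k=10, num_candidates=50, filter=None):
    knn = {
        "field": "vector",  # placeholder field name
        "query_vector": query_vector,
        "k": k,
        "num_candidates": num_candidates,
    }
    if filter is not None:
        knn["filter"] = filter  # e.g. {"term": {"metadata.source": "docs"}}
    return knn
```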
cc: @jeffvestal, @derickson
### Suggestion:
_No response_ | Allow filter to be passed in ElasticKnnSearch knn_search and knn_hybrid_search | https://api.github.com/repos/langchain-ai/langchain/issues/7277/comments | 1 | 2023-07-06T13:56:39Z | 2023-09-20T14:35:44Z | https://github.com/langchain-ai/langchain/issues/7277 | 1,791,621,321 | 7,277 |
[
"hwchase17",
"langchain"
]
| ### Feature request
This adds support for Apache SOLRs vector search capabilities
(https://solr.apache.org/guide/solr/latest/query-guide/dense-vector-search.html)
### Motivation
As SOLR is a commonly used search index and now offers this feature, it is important to allow SOLR users to be able to integrate seamlessly with LangChain (and the associated benefits).
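For anyone picking this up, the surface area is roughly the standard `VectorStore` interface; a skeleton sketch (everything SOLR-specific, including the class name, is an assumption):
```python
from typing import Any, Iterable, List, Optional

from langchain.docstore.document import Document
from langchain.embeddings.base import Embeddings
from langchain.vectorstores.base import VectorStore


class SolrVectorStore(VectorStore):
    """Skeleton only; the pysolr/HTTP details are left out."""

    def __init__(self, solr_url: str, embedding: Embeddings, vector_field: str = "vector"):
        self.solr_url = solr_url
        self.embedding = embedding
        self.vector_field = vector_field

    def add_texts(self, texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) -> List[str]:
        raise NotImplementedError  # index documents together with their embeddings

    def similarity_search(self, query: str, k: int = 4, **kwargs: Any) -> List[Document]:
        raise NotImplementedError  # issue a {!knn f=vector topK=k} query against SOLR

    @classmethod
    def from_texts(cls, texts: List[str], embedding: Embeddings,
                   metadatas: Optional[List[dict]] = None, **kwargs: Any) -> "SolrVectorStore":
        raise NotImplementedError
```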
### Your contribution
I can try submitting a PR | [FEATURE] SOLR Based Vector Store | https://api.github.com/repos/langchain-ai/langchain/issues/7273/comments | 8 | 2023-07-06T12:44:39Z | 2024-02-15T16:11:20Z | https://github.com/langchain-ai/langchain/issues/7273 | 1,791,490,032 | 7,273 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```
def handle_embeddings(payload):
    loader = UnstructuredPDFLoader(payload["filepath"])
    documents = loader.load()
    text_splitter = SpacyTextSplitter(pipeline=payload["language"], chunk_size=1536, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)
    embeddings = OpenAIEmbeddings(**payload)
    qdrant = Qdrant.from_documents(
        docs, embeddings,
        path=path,
        collection_name=collection_name,
    )


import dramatiq
from dramatiq.brokers.redis import RedisBroker
from tasks import handle_embeddings

redis_broker = RedisBroker(url='redis://redis.helloreader.docker/10')
dramatiq.set_broker(redis_broker)


@dramatiq.actor(max_retries = 0)
def handle_embeddings_task(payload):
    result = handle_embeddings(payload)
    return result
```
Due to the time-consuming nature of embeddings and storing them in a vector database, I opted for asynchronous queue tasks to handle them. However, I noticed that when processing documents of size 30 MB, the memory usage of the queue task kept increasing until it eventually crashed due to overflow. At this point, I investigated and found that the memory overflow occurred even before the embeddings interface was called, indicating that the issue was with the `Qdrant.from_documents` method. I have been searching for the root cause for a while but haven't found it yet.
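One thing that might be worth trying while debugging (a sketch, not a confirmed fix): build the collection once and add documents in small batches, so the whole split list is never embedded and uploaded in a single call.
```python
batch_size = 64  # assumed; tune for your documents

qdrant = Qdrant.from_documents(
    docs[:batch_size], embeddings,
    path=path,
    collection_name=collection_name,
)
for start in range(batch_size, len(docs), batch_size):
    qdrant.add_documents(docs[start:start + batch_size])
```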
### Suggestion:
I hope someone who is familiar with the `Qdrant.from_documents` method or has knowledge of other possible causes can help me resolve this issue.
The document size of approximately 30 MB corresponds to approximately 560,000 tokens.
During the process, I tried using Dramatiq, Celery, and RQ, and encountered the same issue with all of them. Therefore, we can exclude the possibility of the issue being specific to these queue tools. | 'Qdrant.from_documents' Memory overflow | https://api.github.com/repos/langchain-ai/langchain/issues/7272/comments | 8 | 2023-07-06T12:25:00Z | 2023-10-16T16:06:24Z | https://github.com/langchain-ai/langchain/issues/7272 | 1,791,458,701 | 7,272 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using Tools and Agents to query different vectorstores. But when I ask a question that is not covered by the vectorstores, it responds "I don't know." Is there an approach I can try where, if the answer is not in the vectorstore, I can carry on the conversation like ChatGPT? If yes, can you please let me know how to integrate this?
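One common pattern (sketched below, not tested here) is to give the agent a general-purpose chat tool backed by the plain LLM, so it has somewhere to go when none of the vectorstore tools apply:
```python
from langchain.agents import Tool

# Sketch: a catch-all tool; `llm` is assumed to be the same chat model the agent uses.
general_chat_tool = Tool(
    name="general-conversation",
    func=lambda q: llm.predict(q),
    description="Useful for general questions that are not covered by any document store.",
)
tools.append(general_chat_tool)
```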
### Suggestion:
_No response_ | Langchain Tools and Agents | https://api.github.com/repos/langchain-ai/langchain/issues/7269/comments | 5 | 2023-07-06T10:11:15Z | 2023-12-01T16:09:13Z | https://github.com/langchain-ai/langchain/issues/7269 | 1,791,243,008 | 7,269 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain v0.0.225, Windows10, Python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
An example taken from the LangChain documentation is as follows:
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
local_path = (
"ggml-gpt4all-j-v1.3-groovy.bin" # replace with your desired local file path
)
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
Running it gives the following result:
TypeError: GPT4All.generate() got an unexpected keyword argument 'n_ctx'
When I add the parameter max_tokens=200, i.e. llm = GPT4All(model=local_path, max_tokens=200, callbacks=callbacks, verbose=True),
I get the following result:
ValidationError: 1 validation error for GPT4All
max_tokens
extra fields not permitted (type=value_error.extra)
### Expected behavior
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(model=local_path, max_tokens=200, callbacks=callbacks, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
# expected: the correct answer should be printed | langchain + gpt4all fails to run, always complaining about parameter problems | https://api.github.com/repos/langchain-ai/langchain/issues/7268/comments | 3 | 2023-07-06T09:22:31Z | 2023-10-14T20:11:07Z | https://github.com/langchain-ai/langchain/issues/7268 | 1,791,159,677 | 7,268
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain v0.0.225, Windows, Python 3.10
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The behavior for `CharacterTextSplitter` when changing `keep_separator` when using normal characters is like this:
```python
text_splitter = CharacterTextSplitter(
chunk_size=4,
chunk_overlap=0,
separator="_",
keep_separator=False,
)
text_splitter.split_text("foo_bar_baz_123")
# ['foo', 'bar', 'baz', '123']
```
```python
text_splitter = CharacterTextSplitter(
chunk_size=4,
chunk_overlap=0,
separator="_",
keep_separator=True,
)
text_splitter.split_text("foo_bar_baz_123")
# ['foo', '_bar', '_baz', '_123']
```
However, when using special regex characters like `.` or `*` the splitter breaks when `keep_separator` is `False`.
```python
text_splitter = CharacterTextSplitter(
chunk_size=4,
chunk_overlap=0,
separator=r"\.",
keep_separator=False,
)
text_splitter.split_text("foo.bar.baz.123")
# ['foo.bar.baz.123']
```
The special characters should be escaped, otherwise it raises an error. For example, the following code raises an error.
```python
text_splitter = CharacterTextSplitter(
chunk_size=4,
chunk_overlap=0,
separator=r"?",
keep_separator=True,
)
text_splitter.split_text("foo?bar?baz?123")
```
I'll make a PR to fix this.
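The shape of the fix I have in mind (a sketch, not the actual internal function): escape the separator before it reaches `re.split` whenever it is meant to be a literal string.
```python
import re

def _split_on_separator(text: str, separator: str, keep_separator: bool) -> list:
    """Sketch: '.', '*', '?', etc. behave like plain characters after re.escape."""
    pattern = re.escape(separator)
    if not keep_separator:
        return re.split(pattern, text)
    # Keep the separator attached to the chunk that follows it, as in the "_" example above.
    pieces = re.split(f"({pattern})", text)
    result = [pieces[0]]
    for i in range(1, len(pieces) - 1, 2):
        result.append(pieces[i] + pieces[i + 1])
    return result
```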
Also, the documentation never mentions that the separator should be a regex, I only found out the hard way after getting regex errors on one of the `RecursiveTextSplitter` splitters after updating LangChain. I think we should add a note about this in the documentation or the code.
### Expected behavior
```python
text_splitter = CharacterTextSplitter(
chunk_size=4,
chunk_overlap=0,
separator=r"\.",
keep_separator=False,
)
text_splitter.split_text("foo.bar.baz.123")
# ['foo', 'bar', 'baz', '123']
``` | Inconsistent behavior of `CharacterTextSplitter` when changing `keep_separator` for special regex characters | https://api.github.com/repos/langchain-ai/langchain/issues/7262/comments | 1 | 2023-07-06T07:57:36Z | 2023-07-06T13:54:13Z | https://github.com/langchain-ai/langchain/issues/7262 | 1,791,023,162 | 7,262 |
[
"hwchase17",
"langchain"
]
| ### System Info
Error: Please install chromadb as a dependency with, e.g. `npm install -S chromadb`
at Function.imports (file:///home/aqib/backend-11/vision/node_modules/langchain/dist/vectorstores/chroma.js:160:19)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Chroma.ensureCollection (file:///home/aqib/backend-11/vision/node_modules/langchain/dist/vectorstores/chroma.js:61:42)
at async Chroma.addVectors (file:///home/aqib/backend-11/vision/node_modules/langchain/dist/vectorstores/chroma.js:88:28)
at async Chroma.addDocuments (file:///home/aqib/backend-11/vision/node_modules/langchain/dist/vectorstores/chroma.js:56:9)
at async Function.fromDocuments (file:///home/aqib/backend-11/vision/node_modules/langchain/dist/vectorstores/chroma.js:145:9)
at async add (file:///home/aqib/backend-11/vision/vectorStore/db.js:24:5)
at async run (file:///home/aqib/backend-11/vision/vectorStore/insertDocs.js:6:3)
node:internal/process/promises:279
triggerUncaughtException(err, true /* fromPromise */);
^
**_Got this when I console Logged the error_**
ReferenceError: fetch is not defined
at Object.<anonymous> (/home/aqib/backend-11/vision/node_modules/chromadb/dist/main/generated/runtime.js:17:24)
at Module._compile (node:internal/modules/cjs/loader:1196:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1250:10)
at Module.load (node:internal/modules/cjs/loader:1074:32)
at Function.Module._load (node:internal/modules/cjs/loader:909:12)
at Module.require (node:internal/modules/cjs/loader:1098:19)
at require (node:internal/modules/cjs/helpers:108:18)
at Object.<anonymous> (/home/aqib/backend-11/vision/node_modules/chromadb/dist/main/generated/api.js:17:19)
at Module._compile (node:internal/modules/cjs/loader:1196:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1250:10)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Call any Chroma method to reproduce this issue
### Expected behavior
It should've inserted the documents | Chroma DB Error | https://api.github.com/repos/langchain-ai/langchain/issues/7260/comments | 11 | 2023-07-06T07:46:56Z | 2024-03-13T19:55:57Z | https://github.com/langchain-ai/langchain/issues/7260 | 1,791,006,392 | 7,260 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm using the `OpenAIEmbeddings` object to create embeddings for my list of documents. Although it expects a list of strings, the name of the `embed_documents` function misleadingly suggests that it accepts a list of `Document` objects. As a result, passing a list of documents raises an `AttributeError`, since `Document` objects do not have a `replace` method.
### Suggestion:
1. Add a `replace` method to `Document` (or)
2. Extend the function `embed_documents` to handle `Document` objects (or)
3. Rename the function to suit handling a list of strings
I will be glad to implement one of them as a first issue! | Issue: confusion about `Document` or string input for `embed_documents` function | https://api.github.com/repos/langchain-ai/langchain/issues/7259/comments | 3 | 2023-07-06T07:32:37Z | 2023-10-05T17:49:36Z | https://github.com/langchain-ai/langchain/issues/7259 | 1,790,985,762 | 7,259 |
[
"hwchase17",
"langchain"
]
| I was using the OpenAI functions agent with custom functions. The custom function (loan eligibility) needs three arguments: state, age and income.
When I run the agent with the question "how much can i borrow in state CA?", it calls the function directly without asking the user for age and income.
Below is the error:
pydantic.error_wrappers.ValidationError: 2 validation errors for LoanEligibilitySchema
age
field required (type=value_error.missing)
income
field required (type=value_error.missing)
how to fix this ?
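For illustration (an assumption about the intended flow, not a confirmed fix): making the extra fields optional and telling the model to ask for anything missing gives the agent a way to come back to the user instead of failing schema validation.
```python
from typing import Optional

from pydantic import BaseModel, Field


class LoanEligibilitySchema(BaseModel):
    """Sketch: optional fields plus instructions to ask the user for missing values."""
    state: str = Field(..., description="US state code, e.g. CA")
    age: Optional[int] = Field(None, description="Borrower age; ask the user if not provided")
    income: Optional[float] = Field(None, description="Annual income; ask the user if not provided")


def loan_eligibility(state: str, age: Optional[int] = None, income: Optional[float] = None) -> str:
    if age is None or income is None:
        return "Missing information: please ask the user for their age and annual income."
    return "eligibility result goes here"  # real logic elided
```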
### Suggestion:
_No response_ | OpenAI Functions Agent not asking required parameter value | https://api.github.com/repos/langchain-ai/langchain/issues/7255/comments | 1 | 2023-07-06T06:19:57Z | 2023-10-12T16:06:00Z | https://github.com/langchain-ai/langchain/issues/7255 | 1,790,892,343 | 7,255 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.225, Python 3.10, Windows
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am creating a structured chat agent using an `AgentExecutor.from_agent_and_tools`. I have added a custom variable `new_variable` to `input_variables`, and followed the instructions on how to add memory as per here: https://python.langchain.com/docs/modules/agents/agent_types/structured_chat
```python
def get_agent_executor(llm: ChatOpenAI, tools: list[Tool], chat_history: MessagesPlaceholder,
                       memory: ConversationBufferMemory) -> AgentExecutor:
    input_variables = ["input", "agent_scratchpad", "chat_history", "new_variable"]
    prefix = CUSTOM_PREFIX
    suffix = CUSTOM_SUFFIX
    custom_prompt = StructuredChatAgent.create_prompt(
        tools, prefix=prefix, suffix=suffix,
        input_variables=input_variables, memory_prompts=[chat_history])
    llm_chain = LLMChain(llm=llm, prompt=custom_prompt, verbose=True)
    convo_agent = StructuredChatAgent(llm_chain=llm_chain)
    agent_executor = AgentExecutor.from_agent_and_tools(
        agent=convo_agent, tools=tools, verbose=True, max_iterations=1,
        memory=memory, memory_prompts=[chat_history], input_variables=input_variables,
        handle_parsing_errors="Check your output and make sure it is a markdown code snippet of a json blob with a single action!")
    return agent_executor
```
This agent crashes every time at the end of the first iteration:
```
final_outputs: Dict[str, Any] = self.prep_outputs(
self.memory.save_context(inputs, outputs)
input_str, output_str = self._get_input_output(inputs, outputs)
prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
raise ValueError(f"One input key expected got {prompt_input_keys}")
ValueError: One input key expected got ['example_good_page', 'input']
```
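One detail that may be relevant here (an assumption based on the error path): when the chain has more than one input key, `ConversationBufferMemory` needs to be told explicitly which one is the user input, for example:
```python
memory = ConversationBufferMemory(
    memory_key="chat_history",
    return_messages=True,
    input_key="input",  # tell the memory which of the multiple inputs to save
)
```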
### Expected behavior
save_context without hiccup | Structured Chat Agent cannot save_context when memory has additional input variables | https://api.github.com/repos/langchain-ai/langchain/issues/7254/comments | 1 | 2023-07-06T04:49:18Z | 2023-10-12T16:06:05Z | https://github.com/langchain-ai/langchain/issues/7254 | 1,790,798,010 | 7,254 |
[
"hwchase17",
"langchain"
]
| null | How to parse docx/pdf file which contains text table and image?Also, we need to classify text table and image, maybe operations are different?thanks | https://api.github.com/repos/langchain-ai/langchain/issues/7252/comments | 2 | 2023-07-06T04:35:02Z | 2023-10-12T16:06:10Z | https://github.com/langchain-ai/langchain/issues/7252 | 1,790,785,354 | 7,252 |
[
"hwchase17",
"langchain"
]
| ### Feature request
how to disable the OpenAI initialization when you're not using an OpenAI model.
[Please check this issue](https://github.com/hwchase17/langchain/issues/7189#issuecomment-1621931461)
### Motivation
I am trying to build a VectorstoreIndexCreator using the following configuration:
embeddings = SentenceTransformerEmbeddings
vectorstore_cls = Chroma
llm = HuggingFaceHub model
Note: I am not using any OpenAI model for the LLM or for embeddings.
here is the code
```
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.llms import HuggingFaceHub
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("pdffile.pdf")
embeddings = SentenceTransformerEmbeddings(model_name="paraphrase-MiniLM-L6-v2")
model = HuggingFaceHub(repo_id="facebook/mbart-large-50",
model_kwargs={"temperature": 0, "max_length":200},
huggingfacehub_api_token=HUGGING_FACE_API_KEY)
index = VectorstoreIndexCreator(
embedding=embeddings,
vectorstore_cls=Chroma,
text_splitter=CharacterTextSplitter(chunk_size=100,chunk_overlap=0)
).from_loaders([loader])
result = index.query(llm=model,question=query,chain_type="refine")
```
But I am still getting the OpenAI key dependency error when I run the code:
```
Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)
```
using langchain version: langchain==0.0.219
### Your contribution
To disable the OpenAI initialization when you're not using an OpenAI model | How to disable the OpenAI initialization when you're not using an OpenAI model | https://api.github.com/repos/langchain-ai/langchain/issues/7251/comments | 3 | 2023-07-06T04:19:04Z | 2024-03-02T14:38:20Z | https://github.com/langchain-ai/langchain/issues/7251 | 1,790,771,160 | 7,251 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain : 0.0.223
os: mac Ventura 13.4.1 max
python: 3.11.3
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have compiled the llama.cpp project with MPS support and verified GPU acceleration under MPS from the command line. However, when calling the model through LangChain, I found that setting n_gpu_layers to 1 does not turn on MPS GPU acceleration. Below are the code and its output.
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm=LlamaCpp(model_path='./zh-models/33B/ggml-model-q4_K.bin', n_ctx="2048", n_gpu_layers=1, callback_manager = callback_manager, verbose = True)
llm('tell me a joke')
`llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 49954
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 6656
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 52
llama_model_load_internal: n_layer = 60
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 17920
llama_model_load_internal: model size = 30B
llama_model_load_internal: ggml ctx size = 0.14 MB
llama_model_load_internal: mem required = 19884.88 MB (+ 3124.00 MB per state)
llama_new_context_with_model: kv self size = 3120.00 MB
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
`
### Expected behavior
support mps gpu acceleration | Does LlamaCPP currently not support the gpu acceleration of mps when n_gpu_layer to 1? | https://api.github.com/repos/langchain-ai/langchain/issues/7249/comments | 0 | 2023-07-06T03:50:02Z | 2023-07-06T03:56:20Z | https://github.com/langchain-ai/langchain/issues/7249 | 1,790,747,965 | 7,249 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add `show_progress_bar` within `OpenAIEmbeddings` class.
### Motivation
Simply speaking,
1. Showing a progress bar inside an existing progress bar is generally not a good practice; most of the time it will break.
2. There might be people who want to keep their console quiet. A rough sketch of how the flag could gate tqdm is below.
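A rough sketch of how the flag could gate tqdm inside `embed_documents` (the internals are assumptions; only the general shape matters):
```python
from typing import Iterable, Iterator


def _iter_batches(n_texts: int, chunk_size: int, show_progress_bar: bool) -> Iterator[int]:
    """Sketch: yield batch start indices, optionally wrapped in tqdm."""
    batch_starts: Iterable[int] = range(0, n_texts, chunk_size)
    if show_progress_bar:
        try:
            from tqdm.auto import tqdm
            batch_starts = tqdm(batch_starts)
        except ImportError:
            pass  # tqdm stays optional
    yield from batch_starts
```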
### Your contribution
I will make a PR | Make tqdm optional for OpenAIEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/7246/comments | 1 | 2023-07-06T02:21:17Z | 2023-07-06T03:58:55Z | https://github.com/langchain-ai/langchain/issues/7246 | 1,790,672,805 | 7,246 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
A continuation of #7126: there are Pinecone features in LangChain that are not mentioned in LangChain's documentation. https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/pinecone
### Idea or request for content:
Add documentation for the other Pinecone functions. | DOC: Pinecone functions need more documentation | https://api.github.com/repos/langchain-ai/langchain/issues/7243/comments | 1 | 2023-07-06T00:15:48Z | 2023-10-12T16:06:20Z | https://github.com/langchain-ai/langchain/issues/7243 | 1,790,553,866 | 7,243 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain Version: 0.0.224
Platform: Mac
Python Version: 3.10.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [x] Callbacks/Tracing
- [x] Async
### Reproduction
Copy/Paste of example snippet from official documentation:
https://python.langchain.com/docs/modules/callbacks/how_to/async_callbacks
```python
import asyncio
from typing import Any, Dict, List

from langchain.chat_models import ChatOpenAI
from langchain.schema import LLMResult, HumanMessage
from langchain.callbacks.base import AsyncCallbackHandler, BaseCallbackHandler
from dotenv import load_dotenv

load_dotenv()


class MyCustomSyncHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"Sync handler being called in a `thread_pool_executor`: token: {token}")


class MyCustomAsyncHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""

    async def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        """Run when chain starts running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        class_name = serialized["name"]
        print("Hi! I just woke up. Your llm is starting")

    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when chain ends running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        print("Hi! I just woke up. Your llm is ending")


# To enable streaming, we pass in `streaming=True` to the ChatModel constructor
# Additionally, we pass in a list with our custom handler
chat = ChatOpenAI(
    max_tokens=25,
    streaming=True,
    callbacks=[MyCustomSyncHandler(), MyCustomAsyncHandler()],
)


async def main():
    await chat.agenerate([[HumanMessage(content="Tell me a joke")]])


if __name__ == '__main__':
    asyncio.run(main())
```
#####################################################
Output:
zzzz....
Error in MyCustomAsyncHandler.on_llm_start callback: 'name'
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 2.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 8.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 16.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Traceback (most recent call last):
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 980, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py", line 1097, in create_connection
transport, protocol = await self._create_connection_transport(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py", line 1127, in _create_connection_transport
await waiter
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/sslproto.py", line 534, in data_received
ssldata, appdata = self._sslpipe.feed_ssldata(data)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/sslproto.py", line 188, in feed_ssldata
self._sslobj.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py", line 975, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 592, in arequest_raw
result = await session.request(**request_kwargs)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/client.py", line 536, in _request
conn = await self._connector.connect(
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 540, in connect
proto = await self._create_connection(req, traces, timeout)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 901, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 1206, in _create_direct_connection
raise last_exc
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 1175, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 982, in _wrap_create_connection
raise ClientConnectorCertificateError(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host api.openai.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/pnsvk/my-apps/p_y/llm-apps/async_callbacks.py", line 49, in <module>
asyncio.run(main())
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/Users/pnsvk/my-apps/p_y/llm-apps/async_callbacks.py", line 45, in main
res = await chat.agenerate([[HumanMessage(content="Tell me a joke")]])
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/langchain/chat_models/base.py", line 191, in agenerate
raise exceptions[0]
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/langchain/chat_models/base.py", line 292, in _agenerate_with_cache
return await self._agenerate(
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 408, in _agenerate
async for stream_resp in await acompletion_with_retry(
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 95, in acompletion_with_retry
return await _completion_with_retry(**kwargs)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/tenacity/_asyncio.py", line 86, in async_wrapped
return await fn(*args, **kwargs)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/tenacity/_asyncio.py", line 48, in __call__
do = self.iter(retry_state=retry_state)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 362, in iter
raise retry_exc.reraise()
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 195, in reraise
raise self.last_attempt.result()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/tenacity/_asyncio.py", line 51, in __call__
result = await fn(*args, **kwargs)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 93, in _completion_with_retry
return await llm.client.acreate(**kwargs)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
return await super().acreate(*args, **kwargs)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
response, _, api_key = await requestor.arequest(
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 304, in arequest
result = await self.arequest_raw(
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 609, in arequest_raw
raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI
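For what it's worth, two separate problems show up in this output. The `Error in MyCustomAsyncHandler.on_llm_start callback: 'name'` line comes from `serialized["name"]`, which is not guaranteed to be present; a minimal sketch of a more defensive handler (the fallback string is my own placeholder):

```python
from typing import Any, Dict, List

from langchain.callbacks.base import AsyncCallbackHandler


class DefensiveAsyncHandler(AsyncCallbackHandler):
    async def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        # Fall back instead of raising KeyError when "name" is missing from the payload.
        class_name = serialized.get("name", "<unknown LLM>")
        print(f"LLM is starting: {class_name}")
```

The subsequent `APIConnectionError` is driven by the `CERTIFICATE_VERIFY_FAILED` traceback, which is a local SSL certificate issue (on macOS python.org installs this is typically resolved by running the bundled `Install Certificates.command`), not something the callback code can influence.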
### Expected behavior
As mentioned in the official documentation:
zzzz....
Hi! I just woke up. Your llm is starting
Sync handler being called in a `thread_pool_executor`: token:
Sync handler being called in a `thread_pool_executor`: token: Why
Sync handler being called in a `thread_pool_executor`: token: don
Sync handler being called in a `thread_pool_executor`: token: 't
Sync handler being called in a `thread_pool_executor`: token: scientists
Sync handler being called in a `thread_pool_executor`: token: trust
Sync handler being called in a `thread_pool_executor`: token: atoms
Sync handler being called in a `thread_pool_executor`: token: ?
Sync handler being called in a `thread_pool_executor`: token:
Sync handler being called in a `thread_pool_executor`: token: Because
Sync handler being called in a `thread_pool_executor`: token: they
Sync handler being called in a `thread_pool_executor`: token: make
Sync handler being called in a `thread_pool_executor`: token: up
Sync handler being called in a `thread_pool_executor`: token: everything
Sync handler being called in a `thread_pool_executor`: token: .
Sync handler being called in a `thread_pool_executor`: token:
zzzz....
Hi! I just woke up. Your llm is ending
LLMResult(generations=[[ChatGeneration(text="Why don't scientists trust atoms? \n\nBecause they make up everything.", generation_info=None, message=AIMessage(content="Why don't scientists trust atoms? \n\nBecause they make up everything.", additional_kwargs={}, example=False))]], llm_output={'token_usage': {}, 'model_name': 'gpt-3.5-turbo'}) | Async callback has issues | https://api.github.com/repos/langchain-ai/langchain/issues/7242/comments | 2 | 2023-07-06T00:03:04Z | 2023-07-06T12:43:26Z | https://github.com/langchain-ai/langchain/issues/7242 | 1,790,539,904 | 7,242 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Include source document index information in the combine-documents chain prompt. For example, in `_get_inputs` of `StuffDocumentsChain` we could make the following change (in addition to corresponding changes to `format_document`):
```
# From
doc_strings = [format_document(doc, self.document_prompt) for doc in docs]
# To
doc_strings = [format_document(doc, i, self.document_prompt) for i, doc in enumerate(docs)]
```
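To make the proposal concrete, here is a rough sketch of what the corresponding `format_document` change could look like. This is illustrative only: `format_document_with_index` and the `doc_index` variable are my own placeholders, not anything that exists in LangChain today.

```python
from langchain.prompts import PromptTemplate
from langchain.schema import Document


def format_document_with_index(doc: Document, index: int, prompt: PromptTemplate) -> str:
    """Format a document, exposing its position in the retrieved list as `doc_index`."""
    kwargs = {"page_content": doc.page_content, "doc_index": index + 1, **doc.metadata}
    # Only pass the variables the prompt actually declares.
    return prompt.format(**{k: kwargs[k] for k in prompt.input_variables})


document_prompt = PromptTemplate(
    input_variables=["doc_index", "page_content"],
    template="[{doc_index}] {page_content}",
)

docs = [Document(page_content="Harrison went to Harvard."),
        Document(page_content="Emma went to Yale.")]
print("\n".join(format_document_with_index(d, i, document_prompt) for i, d in enumerate(docs)))
```

With a document prompt like this, the LLM only has to emit short bracketed indices instead of full URLs, which is the main point of the request.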
### Motivation
The point of this change is to enable QA based chains (e.g. `ConversationalRetrievalChain`) to easily do inline citations using the source document's index.
## Example
### Prompt
```
Context:
[1] Harrison went to Harvard.
[2] Ankush went to Princeton.
[3] Emma went to Yale.
Question:
Where did Harrison and Emma go to college?
```
### Response
```
Harrison went to Harvard【1】 and Emma went to Yale【3】.
```
This type of structure is also found in popular "QA" models like Bing Chat and ChatGPT's Browse feature (WebGPT etc.). I feel like there should at least be high-level options to enable something like this without having to make custom modifications/extensions to the existing chains.
Without explicitly including these document indices, I find prompting the LLM to cite documents by index could lead to hallucinated citations (e.g. with retrieved `k=4` documents it cites "[8]")
## More Details
As far as I can tell, existing QA implementations in Langchain seem to return source documents separately (i.e. `return_source_documents=True`), or at the end of the response (e.g. `{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}` as in [document-qa-with-sources](https://python.langchain.com/docs/modules/chains/additional/question_answering#document-qa-with-sources)) rather than provide them in-line.
Even newer approaches using OpenAI's Functions API e.g. from `create_citation_fuzzy_match_chain` and `create_qa_with_sources_chain` do not provide this option.
The approach of [QA with sources chain](https://python.langchain.com/docs/modules/chains/additional/openai_functions_retrieval_qa) has a drawback that the LLM has to generate the entire source of the document (e.g. full URL). This is slower because we can just reference the index of the document instead. Also, this is prone to hallucination especially with `gpt-3.5` where fake sources (like URLs) could be generated.
Similarly, the method in [Question-Answering Citations](https://python.langchain.com/docs/modules/chains/additional/qa_citations) provides quotes from source documents, but doesn't actually identify which document they're from. Referencing documents by index should help reduce hallucination and generation speed here as well.
### Your contribution
I'm happy to assist with this, but first I'd like to gather feedback on the idea. It's possible that there are existing approaches or best practices that I'm not familiar with, which could facilitate inline citations without additional implementation. If there are any recommendations on how to proceed with this, I'd be interested in having a discussion around that. | Add Document Index Information for QA Inline Citations | https://api.github.com/repos/langchain-ai/langchain/issues/7239/comments | 22 | 2023-07-05T22:46:45Z | 2024-05-16T16:06:39Z | https://github.com/langchain-ai/langchain/issues/7239 | 1,790,462,714 | 7,239 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: '0.0.218'
windows 10
### Who can help?
@dev2049 @homanp
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the following:
```py
from langchain.chains.openai_functions.openapi import get_openapi_chain
chain = get_openapi_chain("https://chat-web3-plugin.alchemy.com/openapi.yaml")
chain.run("DOES NOT MATTER")
```
Results in endless loop
```shell
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
...
```
### Expected behavior
Loop should break | OpenAPISpec functions can get stuck in a loop | https://api.github.com/repos/langchain-ai/langchain/issues/7233/comments | 2 | 2023-07-05T21:31:57Z | 2023-10-12T16:06:26Z | https://github.com/langchain-ai/langchain/issues/7233 | 1,790,374,247 | 7,233 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Still new here so I mimicked the setup of `llm/llamacpp.py` wrapper for the draft of Salesforce's new LLM [XGen](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/91).
### Motivation
Expand the LLM sets integrated in Langchain.
### Your contribution
PR #7221 | Salesforce XGen integration | https://api.github.com/repos/langchain-ai/langchain/issues/7223/comments | 1 | 2023-07-05T19:54:04Z | 2023-07-06T04:53:07Z | https://github.com/langchain-ai/langchain/issues/7223 | 1,790,201,590 | 7,223 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi I found myself having to do a "hack" when using MultiPromptChain.
In particular, when my destination chains take more than one parameter into the template at runtime (for example `{current_timestamp}`), I was getting the following exception:
```
File "/home/xxx/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 149, in __call__
inputs = self.prep_inputs(inputs)
File "/home/xxx/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 258, in prep_inputs
self._validate_inputs(inputs)
File "/home/xyz/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 103, in _validate_inputs
raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'current_timestamp'}
```
I had the below configuration:
```
destinations = [f"{p['name']}: {p['description']}" for p in tools.prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str,
current_timestamp=datetime.now().isoformat())
router_prompt = PromptTemplate(
template=router_template,
input_variables=["input"],
output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
chain = MultiPromptChain(
router_chain=router_chain,
destination_chains=tools.chains,
default_chain=default_chain
verbose=True,
)
```
which had to be modified
first:
I used modified router template
```
MY_MULTI_PROMPT_ROUTER_TEMPLATE = """\
Given a raw text input to a language model select the model prompt best suited for \
the input. You will be given the names of the available prompts and a description of \
what the prompt is best suited for. You may also revise the original input if you \
think that revising it will ultimately lead to a better response from the language \
model.
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
"destination": string \\ name of the prompt to use or "DEFAULT"
"next_inputs": dict \\ dictionary of the with two fields "input" containing original input and current_timestamp containing {current_timestamp}
}}}}
```
REMEMBER: "destination" MUST be one of the candidate prompt names specified below OR
it can be "DEFAULT" if the input is not well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input if you don't think any \
modifications are needed.
<< CANDIDATE PROMPTS >>
{destinations}
<< INPUT >>
{{input}}
<< OUTPUT >>
"""e prompt is best suited for. You may also revise the original input if you \
think that revising it will ultimately lead to a better response from the language \
model.
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
"destination": string \\ name of the prompt to use or "DEFAULT"
"next_inputs": dict \\ dictionary of the with two fields "input" containing original input and current_timestamp containing {current_timestamp}
}}}}
```
REMEMBER: "destination" MUST be one of the candidate prompt names specified below OR
it can be "DEFAULT" if the input is not well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input if you don't think any \
modifications are needed.
<< CANDIDATE PROMPTS >>
{destinations}
<< INPUT >>
{{input}}
<< OUTPUT >>
"""
```
I also had to change the implementation of RouterOutputParser where I changed
line 102 from
```
parsed["next_inputs"] = {self.next_inputs_inner_key: parsed["next_inputs"]}
to
parsed["next_inputs"] = parsed["next_inputs"]
```
This has allowed me to return and pass the desired next_inputs dict including current_timestamp into the destination chain
My chain initialization changed to the following:
```
destinations = [f"{p['name']}: {p['description']}" for p in tools.prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MY_MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str,
current_timestamp=datetime.now().isoformat())
router_prompt = PromptTemplate(
template=router_template,
input_variables=["input"],
output_parser=MyRouterOutputParser(next_inputs_inner_key="input", next_inputs_type=dict),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
chain = MultiPromptChain(
router_chain=router_chain,
destination_chains=tools.chains,
default_chain=tools.chains[ConversationStage.CASUAL.value],
verbose=True,
)
```
So I managed to pass the correct parameters to the destination chain, but then I faced a new issue:
File "/home/xxx/venv/lib/python3.10/site-packages/langchain/memory/utils.py", line 21, in get_prompt_input_key
raise ValueError(f"One input key expected got {prompt_input_keys}")
ValueError: One input key expected got ['current_timestamp', 'input']
After investigating, I had to define a new `MyConversationBufferMemory`:
```
class MyConversationBufferMemory(ConversationBufferMemory):
other_keys: list = []
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key] + self.other_keys
```
Instead of creating my memory in this way:
`memory = ConversationBufferMemory(memory_key="chat_history", chat_memory=get_history(user_id))`
I have created it in this way
`memory = MyConversationBufferMemory(memory_key="chat_history", chat_memory=get_history(user_id),other_keys=['current_timestamp'])`
This has finally allowed me to get the response from the destination chain.
If someone has followed along, do you think there is a better way of doing this?
### Suggestion:
_No response_ | Issue: Issue with MutliPromptRouter with memory destination chains | https://api.github.com/repos/langchain-ai/langchain/issues/7220/comments | 2 | 2023-07-05T19:50:23Z | 2023-10-12T16:06:31Z | https://github.com/langchain-ai/langchain/issues/7220 | 1,790,197,414 | 7,220 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add retry policy to VertexAI models.
### Motivation
E.g., when trying to run a summarization chain on many chunks (I reproduce the error with 99 chunks), an exception `ResourceExhausted: 429 Quota exceeded` might be returned by Vertex.
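Until this lands, a user-side workaround sketch with `tenacity` (assuming the quota error surfaces as `google.api_core.exceptions.ResourceExhausted`) might look like this:

```python
from google.api_core.exceptions import ResourceExhausted
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential


@retry(
    retry=retry_if_exception_type(ResourceExhausted),
    wait=wait_exponential(multiplier=1, min=2, max=60),  # back off between attempts
    stop=stop_after_attempt(6),
)
def predict_with_retry(llm, prompt: str) -> str:
    # llm is expected to be a langchain.llms.VertexAI instance (or any callable LLM).
    return llm(prompt)
```

Calling `predict_with_retry(llm, chunk_prompt)` per chunk in the summarization loop then rides out transient 429s instead of failing the whole run.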
### Your contribution
yes, I'll submit a PR shortly. | Add retries to VertexAI models | https://api.github.com/repos/langchain-ai/langchain/issues/7217/comments | 1 | 2023-07-05T18:45:48Z | 2023-07-10T08:52:37Z | https://github.com/langchain-ai/langchain/issues/7217 | 1,790,092,753 | 7,217 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
More of a clarification than an issue. The JavaScript documentation describes the Document interface: https://js.langchain.com/docs/modules/schema/document. Does an equivalent exist in the Python version? When I try `from langchain.document import Document` in Python, it throws an error.
### Idea or request for content:
_No response_ | DOC: DOCUMENT interface in Javascript but not in Python | https://api.github.com/repos/langchain-ai/langchain/issues/7215/comments | 2 | 2023-07-05T18:15:21Z | 2023-07-05T18:41:14Z | https://github.com/langchain-ai/langchain/issues/7215 | 1,790,052,100 | 7,215 |
[
"hwchase17",
"langchain"
]
| ### System Info
There seem to be some hallucinations involving the president and Michael Jackson.
I use the following, where `data` is loaded using `UnstructuredURLLoader(urls).load()` and `urls` is just a list of URLs I'm interested in. Needless to say, none of the URLs involve Michael Jackson (or the president, for that matter).
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
chain = RetrievalQAWithSourcesChain.from_chain_type(
llm= HuggingFaceHub(
repo_id="tiiuae/falcon-7b-instruct",
model_kwargs={"max_new_tokens": 500}
),
chain_type="map_reduce",
retriever=FAISS.from_documents(doc_splitter.split_documents(data),
HuggingFaceEmbeddings()).as_retriever()
)
```
followed by
```
prompt = "some text unrelated to Michael Jackson."
chain({"question": prompt}, return_only_outputs=True)
```
I believe this occurs as part of the `map_reduce.py` file:
```
result, extra_return_dict = self.reduce_documents_chain.combine_docs(
result_docs, callbacks=callbacks, **kwargs
)
```
### Expected behavior
should have provided some answer w.r.t to the provided data, stored in FAISS vectorbase | Undesired outputs when using map_reduce | https://api.github.com/repos/langchain-ai/langchain/issues/7199/comments | 2 | 2023-07-05T13:55:41Z | 2023-07-08T17:20:32Z | https://github.com/langchain-ai/langchain/issues/7199 | 1,789,623,893 | 7,199 |
[
"hwchase17",
"langchain"
]
| ### System Info
**Python : Python 3.9.13
Langchain: langchain==0.0.219
OS : Ubuntu**
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chains import ConversationalRetrievalChain
from langchain.vectorstores import Chroma
from langchain.embeddings import SentenceTransformerEmbeddings
embeddings = SentenceTransformerEmbeddings(model_name="paraphrase-MiniLM-L6-v2")
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("file.pdf")
documents = loader.load()
from langchain.llms import HuggingFaceHub
model = HuggingFaceHub(repo_id="facebook/mbart-large-50",
model_kwargs={"temperature": 0, "max_length":200},
huggingfacehub_api_token=HUGGING_FACE_API_KEY)
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
texts=text_splitter.split_documents(documents)
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 1})
qa = ConversationalRetrievalChain.from_llm(model,retriever)
chat_history= []
query = "sample query"
result = qa({"question": query,"chat_history":chat_history})
print("\nResult of ConversationalRetrievalChainMethod")
print(result)
```
It returns the **result** as follows:
{'question': 'sample question', 'chat_history': [], '**answer': "Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.**"}
### Expected behavior
Expecting the answer in the result. But it returns the prompt itself now | ConversationalRetrievalChain return only the prompt not the answer | https://api.github.com/repos/langchain-ai/langchain/issues/7193/comments | 4 | 2023-07-05T11:58:18Z | 2023-11-20T16:06:21Z | https://github.com/langchain-ai/langchain/issues/7193 | 1,789,401,709 | 7,193 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/e27ba9d92bd2cc4ac9ed7439becb2d32816fc89c/langchain/llms/huggingface_pipeline.py#L169
should be modified to:
#response = self.pipeline(prompt)
response = self.pipeline(prompt, **kwargs) | kwargs are forgot to send to huggingface pipeline call | https://api.github.com/repos/langchain-ai/langchain/issues/7192/comments | 3 | 2023-07-05T11:18:41Z | 2023-12-19T00:50:02Z | https://github.com/langchain-ai/langchain/issues/7192 | 1,789,340,428 | 7,192 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When using AgentType.OPENAI_FUNCTIONS, the error message "openai.error.InvalidRequestError: 'Gmail: Find Email' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.6.name'" suggests that the name you are using for the function ('Gmail: Find Email') does not adhere to the naming conventions.
For AgentType.OPENAI_FUNCTIONS, function names can only contain alphanumeric characters, underscores (_), and hyphens (-). The name must be between 1 and 64 characters long.
To resolve this issue, make sure the name you provide for the function complies with the naming rules mentioned above.
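As a rough illustration (not an official LangChain utility), tool names can be normalised before handing them to the agent so they satisfy the pattern:

```python
import re


def to_openai_function_name(name: str) -> str:
    """Map an arbitrary tool name to something matching ^[a-zA-Z0-9_-]{1,64}$."""
    cleaned = re.sub(r"[^a-zA-Z0-9_-]", "_", name).strip("_")
    return (cleaned or "tool")[:64]


print(to_openai_function_name("Gmail: Find Email"))  # -> "Gmail__Find_Email"
```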
If you need further assistance, please provide more details about the specific function you're trying to use, and I'll be happy to help you further.
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/7191/comments | 3 | 2023-07-05T10:53:22Z | 2023-10-12T16:06:36Z | https://github.com/langchain-ai/langchain/issues/7191 | 1,789,300,649 | 7,191 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I have written code to add memory to ConversationalRetrievalChain, but I am getting the error **"Missing some input keys: {'question'}"**. Below is a snippet of my code:
memory = ConversationBufferMemory(
memory_key="chat_history",
input_key="question"
)
chatTemplate = """
Answer the question based on the chat history(delimited by <hs></hs>) and context(delimited by <ctx> </ctx>) below.
-----------
<ctx>
{context}
</ctx>
-----------
<hs>
{chat_history}
</hs>
-----------
Question: {question}
Answer:
"""
promptHist = PromptTemplate(
input_variables=["context", "question", "chat_history"],
template=chatTemplate
)
retriever = chatDb.as_retriever(search_type="similarity", search_kwargs={"k": 2})
qa = ConversationalRetrievalChain.from_llm(
llm=get_openai_model(), chain_type="stuff", retriever=retriever, return_source_documents=True,
verbose = True,
combine_docs_chain_kwargs={'prompt': promptHist},
memory = memory,
)
result = qa({"query": prompt["prompt"]})
### Suggestion:
_No response_ | Issue: Missing some input keys: {'question'} when using ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/7190/comments | 6 | 2023-07-05T10:50:07Z | 2024-03-04T14:32:18Z | https://github.com/langchain-ai/langchain/issues/7190 | 1,789,293,722 | 7,190 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am trying to build a **VectorstoreIndexCreator** using the following configuration
embeddings = **SentenceTransformerEmbeddings**
vectorstore_cls = **Chroma**
llm = **HuggingFaceHub** model
Note: I am not using any **OpenAI** model for either the **llm** or the **embedding**.
here is the code
```
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.llms import HuggingFaceHub
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("pdffile.pdf")
embeddings = SentenceTransformerEmbeddings(model_name="paraphrase-MiniLM-L6-v2")
model = HuggingFaceHub(repo_id="facebook/mbart-large-50",
model_kwargs={"temperature": 0, "max_length":200},
huggingfacehub_api_token=HUGGING_FACE_API_KEY)
index = VectorstoreIndexCreator(
embeddings=embeddings,
vectorstore_cls=Chroma,
text_splitter=CharacterTextSplitter(chunk_size=100,chunk_overlap=0)
).from_loaders([loader])
result = index.query(llm=model, question=query, chain_type="refine")
```
But I am still getting the OpenAI key dependency error when I run the code:
```
Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)
```
using langchain version: **langchain==0.0.219**
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.llms import HuggingFaceHub
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("pdffile.pdf")
embeddings = SentenceTransformerEmbeddings(model_name="paraphrase-MiniLM-L6-v2")
model = HuggingFaceHub(repo_id="facebook/mbart-large-50",
model_kwargs={"temperature": 0, "max_length":200},
huggingfacehub_api_token=HUGGING_FACE_API_KEY)
index = VectorstoreIndexCreator(
embeddings=embeddings,
vectorstore_cls=Chroma,
text_splitter=CharacterTextSplitter(chunk_size=100,chunk_overlap=0)
).from_loaders([loader])
result = index.query(llm=model, question=query, chain_type="refine")
```
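A plausible cause, for what it's worth: the field on `VectorstoreIndexCreator` is `embedding` (singular), so the `embeddings=` keyword above is most likely ignored and the default `OpenAIEmbeddings` gets constructed instead, which is exactly what asks for `OPENAI_API_KEY`. A sketch with the singular keyword, reusing the `embeddings` and `loader` objects defined above:

```python
from langchain.indexes import VectorstoreIndexCreator
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

index = VectorstoreIndexCreator(
    embedding=embeddings,  # note: singular "embedding", not "embeddings"
    vectorstore_cls=Chroma,
    text_splitter=CharacterTextSplitter(chunk_size=100, chunk_overlap=0),
).from_loaders([loader])
```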
### Expected behavior
Dont show any openai error | Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error) | https://api.github.com/repos/langchain-ai/langchain/issues/7189/comments | 11 | 2023-07-05T10:43:47Z | 2024-04-12T15:07:16Z | https://github.com/langchain-ai/langchain/issues/7189 | 1,789,283,892 | 7,189 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.11.4
duckdb==0.8.1
chromadb==0.3.26
langchain==0.0.221
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I persisted my Chroma database on an EC2 instance, where I have all these files:
chroma-collections.parquet
chroma-embeddings.parquet
index/
I downloaded these files to test the database on my local machine, but I got the error:
> Invalid Input Error: No magic bytes found at end of file 'database/vectors/chroma-embeddings.parquet'
When I tried to:
```
from langchain.vectorstores import Chroma
db = Chroma(embedding_function=embedding_function,
persist_directory=persist_directory,
collection_name=collection_name)
```
Can I download these persisted files and test them on another machine?
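In principle the persisted directory is portable; the "No magic bytes" message usually means the parquet file is truncated or was copied before `persist()` finished, since a valid parquet file ends with the 4-byte `PAR1` marker. A quick check you can run on the downloaded copy (the path below is assumed, adjust it to yours):

```python
from pathlib import Path

path = Path("database/vectors/chroma-embeddings.parquet")  # assumed local path
data = path.read_bytes()
# A well-formed parquet file both starts and ends with the magic bytes b"PAR1".
print(len(data), data[:4], data[-4:])
print("looks intact" if data[-4:] == b"PAR1" else "file is truncated/corrupted - re-download or re-persist")
```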
### Expected behavior
To read all the embeddings | Load Chroma database: Invalid Input Error: No magic bytes found at end of file 'database/vectors/chroma-embeddings.parquet' | https://api.github.com/repos/langchain-ai/langchain/issues/7188/comments | 2 | 2023-07-05T10:33:28Z | 2023-10-12T16:06:41Z | https://github.com/langchain-ai/langchain/issues/7188 | 1,789,266,410 | 7,188 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://python.langchain.com/docs/modules/model_io/models/llms/integrations/sagemaker
This example is not working. I've just copy-pasted the code from here and run it in my notebook instance. I'm using the falcon-40b-instruct model.
The error I'm getting is as follows:
Error raised by inference endpoint: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (422) from primary with message "Failed to deserialize the JSON body into the target type: missing field `inputs` at line 1 column 509".
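The 422 "missing field `inputs`" reply suggests the deployed Falcon endpoint expects a text-generation-inference style payload of the form `{"inputs": ..., "parameters": ...}`, which the content handler in the docs example does not produce. A hedged sketch of a handler shaped for that payload; the request and response field names here are assumptions inferred from the error message, not taken from the docs:

```python
import json

from langchain.llms.sagemaker_endpoint import LLMContentHandler


class FalconContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Falcon/TGI-style endpoints expect the prompt under "inputs".
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output) -> str:
        # Assumed response shape: [{"generated_text": "..."}]
        body = json.loads(output.read().decode("utf-8"))
        return body[0]["generated_text"]
```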
### Idea or request for content:
_No response_ | DOC: https://python.langchain.com/docs/modules/model_io/models/llms/integrations/sagemaker | https://api.github.com/repos/langchain-ai/langchain/issues/7187/comments | 4 | 2023-07-05T10:10:48Z | 2024-02-10T16:22:12Z | https://github.com/langchain-ai/langchain/issues/7187 | 1,789,229,545 | 7,187 |
[
"hwchase17",
"langchain"
]
| Hello,
I'm using _langchain_ for QA with court case documents. More specifically, the RetrievalQAWithSourcesChain to retrieve the answer and document source information. However, when running the chain with embedded documents, I get the following error:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
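For context on why this particular line blows up: `re.split(r"SOURCES:\s", answer)` only yields exactly two pieces when the model's answer contains the literal `SOURCES:` marker exactly once; zero or several occurrences make the two-variable unpacking fail. A tiny standalone illustration (the strings are made up):

```python
import re

good = "The ruling was upheld.\nSOURCES: case.pdf"
bad = "The ruling was upheld. SOURCES: case.pdf ... SOURCES: appendix.pdf"

print(len(re.split(r"SOURCES:\s", good)))  # 2 -> unpacking works
print(len(re.split(r"SOURCES:\s", bad)))   # 3 -> "too many values to unpack"
```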
The passed documents are the sections from the court case. I added the following **metadata** fields:
1. Source: PDF file name.
2. Section: Name of the section
3. Section_chunk: Numeric value used for identification in case the section was divided into chunks.
4. Page: Page where the section chunk starts.
The documents are passed as retriever to the chain with FAISS (FAISS.from_documents(documents, self.embeddings)).
I tried out two approaches (both resulting in the same error):
1. providing the _load_qa_chain_ as chain
2. creating it using the class method **_.from_chain_type_**
My question is why this error occurs, and whether the type of metadata used may cause it.
Thank you in advance! | Issue: RetrievalQAWithSourcesChain gives error 'too many values to unpack (expected 2)' after running. | https://api.github.com/repos/langchain-ai/langchain/issues/7184/comments | 2 | 2023-07-05T09:49:42Z | 2023-08-16T20:30:16Z | https://github.com/langchain-ai/langchain/issues/7184 | 1,789,191,768 | 7,184 |
[
"hwchase17",
"langchain"
]
| ### System Info
llm = AzureOpenAI(
model_name="gpt-4-32k",
engine="gpt-4-32k"
)
llm("tell me a joke")
Exception:
The completion operation does not works with the specific model, gpt-4-32k, pls choose different model....
Environment:
LangChain:0.0.218
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Environment:
LangChain:0.0.218
llm = AzureOpenAI(
model_name="gpt-4-32k",
engine="gpt-4-32k"
)
llm("tell me a joke")
run the code above
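For what it's worth, the error reads like the completions endpoint rejecting a chat-only model: `AzureOpenAI` goes through the completions API, and GPT-4 deployments are generally only reachable through the chat API. A sketch using the chat wrapper instead, assuming the usual Azure environment variables (OPENAI_API_KEY, OPENAI_API_BASE, OPENAI_API_VERSION) are already set and the deployment is really named gpt-4-32k:

```python
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

llm = AzureChatOpenAI(
    deployment_name="gpt-4-32k",  # your Azure deployment name
    temperature=0,
)
print(llm([HumanMessage(content="tell me a joke")]).content)
```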
### Expected behavior
the program should work | does llms.AzureOpenAI support gpt4 or gpt-32k? | https://api.github.com/repos/langchain-ai/langchain/issues/7182/comments | 2 | 2023-07-05T09:13:02Z | 2023-10-12T16:06:46Z | https://github.com/langchain-ai/langchain/issues/7182 | 1,789,125,219 | 7,182 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
from langchain.text_splitter import SentenceTransformersTokenTextSplitter
splitter = SentenceTransformersTokenTextSplitter(
tokens_per_chunk=64,
chunk_overlap=0,
model_name='intfloat/e5-base-v2',
add_start_index=True,
)
text = "- afrikaans\n- العربية\n- azərbaycanca\n- বাংলা\n- беларуская\n- bosanski\n- čeština\n- deutsch\n- eesti\n- ελληνικά\n- español\n- فارسی\n- français\n- gaeilge\n- 한국어\n- հայերեն\n- bahasa indonesia\n- עברית\n- jawa\n- kurdî\n- latviešu\n- lietuvių\n- македонски\n- malti\n- मराठी\n- مصرى\n- bahasa melayu\n- nederlands\n- 日本語\n- oʻzbekcha / ўзбекча\n- ਪੰਜਾਬੀ\n- پنجابی\n- پښتو\n- português\n- română\n- русский\n- simple english\n- کوردی\n- suomi\n- తెలుగు\n- ไทย\n- türkçe\n- українська\n- اردو\n- tiếng việt\n- 粵語\n- 中文\nedit links\ncoordinates: 41°43′32″n 49°56′49″w / 41.72556°n 49.94694°w / 41.72556; -49.94694\nfrom wikipedia, the free encyclopedia\n2023 submersible implosion in the atlantic\n|date||18 june 2023|\n|location||north atlantic ocean, near the wreck of the titanic|\n|coordinates||41°43′32″n 49°56′49″w / 41.72556°n 49.94694°w / 41.72556; -49.94694|\n|type||maritime disaster|\n|cause||failure of the pressure hull|\n|participants||5 passengers|\n|outcome||submersible destroyed by implosion|\n|deaths||5 (see fatalities)|\non 18 june 2023, titan, a submersible operated by american tourism and expeditions company oceangate, imploded during an expedition to view the wreck of the titanic in the north atlantic ocean off the coast of newfoundland, canada.on board titan, a submersible operated by american tourism and expeditions company oceangate, were stockton rush, the ceo of american tourism and expeditions company oceangate; paul-henri nargeolet, a french deep sea explorer and the titanic| expert; hamish harding, a british billionaire businessman; shahzada dawood, a pakistani-british billionaire businessman; and dawood's son suleman.communication with titan, a submersible operated by american tourism and expeditions company oceangate, was lost 1 hour and 45 minutes into imploded submersible operated by american tourism and expeditions company oceangate, dive.authorities were alerted when titan, a submersible operated by american tourism and expeditions company oceangate, failed to resurface at the scheduled time later that day.after titan, a submersible operated by american tourism and expeditions company oceangate, had been missing for four days, a remotely operated underwater vehicle (rov) discovered a debris field containing parts of titan, a submersible operated by american tourism and expeditions company oceangate,, about 500 metres (1,600 ft) from the bow of the titanic|.the search area was informed by the united states navy's (usn)"
```
```
"".join(splitter.split_text(text)) == text
False
```
Additional characters are added
```
len("".join(splitter.split_text(text)))
2534
len(text)
2426
```
Newlines are stripped
```
text[:50]
'- afrikaans\n- العربية\n- azərbaycanca\n- বাংলা\n- бел'
splitter.split_text(text)[0][:50]
'- afrikaans - العربية - azərbaycanca - বাংলা - бел'
```
Special tokens are added
```
text[193:293]
'awa\n- kurdî\n- latviešu\n- lietuvių\n- македонски\n- malti\n- मराठी\n- مصرى\n- bahasa melayu\n- nederlands\n-'
"".join(splitter.split_text(text))[200:300]
'awa - kurdi - latviesu - lietuviu - македонски -malti - [UNK] - مصرى - bahasa melayu - nederlands - '
```
Recommended improvement: call
`tokenizer(text, return_offsets_mapping=True)`
This will allow selecting N tokens and reconstructing the original text from character offsets, without the use of tokenizer.decode (which is not perfectly invertible).
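A rough sketch of the offset-mapping approach, using the Hugging Face fast tokenizer directly; the chunking policy here (fixed windows of `tokens_per_chunk`) is a simplification of mine, not the library's actual implementation:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-base-v2")
text = "- afrikaans\n- العربية\n- azərbaycanca\n- বাংলা\n- беларуская"

enc = tokenizer(text, add_special_tokens=False, return_offsets_mapping=True)
offsets = enc["offset_mapping"]

tokens_per_chunk, chunks, prev_end = 8, [], 0
for start in range(0, len(offsets), tokens_per_chunk):
    window = offsets[start:start + tokens_per_chunk]
    chunks.append(text[prev_end:window[-1][1]])  # slice the *original* string by char offsets
    prev_end = window[-1][1]
chunks[-1] += text[prev_end:]  # keep any trailing characters after the last token

# Exact reconstruction: newlines and characters the vocab cannot represent are preserved verbatim.
assert "".join(chunks) == text
```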
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Please see my code above
### Expected behavior
Please see my code above | SentenceTransformersTokenTextSplitter Doesn't Preserve Text | https://api.github.com/repos/langchain-ai/langchain/issues/7181/comments | 6 | 2023-07-05T08:29:31Z | 2024-04-22T18:03:57Z | https://github.com/langchain-ai/langchain/issues/7181 | 1,789,052,195 | 7,181 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.221
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When trying to index documents, the API returns an error:
`APIError: {'code': '22P02', 'details': None, 'hint': None, 'message': 'invalid input syntax for type bigint: "54d7bd9c-9822-40ca-ade6-ae173b65d34e"'}`
This code perfectly worked on previous versions of Langchain.
After upgrading, got the error.
Supabase backend didn't change.
```python
# load required libraries
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore
from supabase import Client, create_client
import os
from dotenv import load_dotenv
# load document from web
url = "https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/supabase"
loader = WebBaseLoader(url)
documents = loader.load()
# split documents into chunks
docs_splitter = RecursiveCharacterTextSplitter(chunk_size=2500,chunk_overlap=250)
splitted_docs = docs_splitter.split_documents(documents=documents)
# initialize embeddings model
embeddings = OpenAIEmbeddings()
# initialize vector store
supabase_url = os.getenv('SUPABASE_URL')
supabase_key = os.getenv('SUPABASE_KEY')
supabase = create_client(supabase_url, supabase_key)
# save values to supabase
vector_store = SupabaseVectorStore.from_documents(documents=splitted_docs, embedding=embeddings, client=supabase)
```
### Expected behavior
document embeddings should be saved in a Supabase database | SupabaseVectorStore.from_documents returns APIError: {'code': '22P02' | https://api.github.com/repos/langchain-ai/langchain/issues/7179/comments | 4 | 2023-07-05T07:44:06Z | 2023-10-25T16:07:47Z | https://github.com/langchain-ai/langchain/issues/7179 | 1,788,978,302 | 7,179 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
## Problem description
I've used `RetrievalQA.from_chain_type()` with the `refine` chain type to design a chat-with-PDF app.
But the response is often **incomplete**; see the following result, where the `Answer` is cut off, which makes `json.loads` fail.
Furthermore, I've used `get_openai_callback` to check whether the tokens exceed the limit.
However, the callback shows the total token count is 3432, which does not exceed the limit.
## Questions
1. Why is the response incomplete?
2. How can I make the response complete so that I can run `json.loads`?
## Code
```python
from datetime import datetime
from typing import List
import langchain
from langchain.callbacks import get_openai_callback
from langchain.chains import RetrievalQA
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from langchain.vectorstores import Chroma
from langchain.document_loaders import PyMuPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from pydantic import BaseModel
docs = PyMuPDFLoader('file.pdf').load()
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=300)  # the documents should not be passed to the constructor
docs = splitter.split_documents(docs)
class Specification(BaseModel):
product_name: str
manufactured_date: datetime
size_inch: str
resolution: str
contrast: str
operation_temperature: str
power_supply: str
sunlight_readable: bool
antiglare: bool
low_power_consumption: bool
high_brightness: bool
wide_temperature: bool
fast_response: bool
screen_features: List[str]
parser = PydanticOutputParser(pydantic_object=Specification)
prompt_template = """
Use the following pieces of context to answer the question, if you don't know the answer, leave it blank('') don't try to make up an answer.
{context_str}
Question: {question}
{format_instructions}
"""
prompt = PromptTemplate(
template=prompt_template,
input_variables=['context_str', 'question'],
partial_variables={'format_instructions': parser.get_format_instructions()}
)
chain_type_kwargs = {
'question_prompt': prompt,
'verbose': True
}
llm = OpenAI(temperature=0)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(
documents=docs,
embedding= embeddings
)
qa_chain = RetrievalQA.from_chain_type(
llm=llm,
chain_type = 'refine',
retriever= db.as_retriever(),
chain_type_kwargs=chain_type_kwargs
)
query = f"""What's the display specifications?"""
with get_openai_callback() as cb:
res = qa_chain.run(query)
print(cb, '\n')
print(res)
```
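One likely culprit, for what it's worth: `OpenAI(temperature=0)` keeps the wrapper's default `max_tokens` of 256, and the truncated answers below are each roughly that long, so the completion is being cut off by the output cap rather than by the 4K context limit. A sketch with a larger completion budget (1024 is an arbitrary choice):

```python
from langchain.llms import OpenAI

# Give the model enough room to finish the JSON instead of stopping mid-field.
llm = OpenAI(temperature=0, max_tokens=1024)
```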
## Process of Chain
```
[1m> Entering new chain...[0m
[1m> Entering new chain...[0m
Prompt after formatting:
[32;1m[1;3m
Use the following pieces of context to answer the question, if you don't know the answer, leave it blank('') don't try to make up an answer.
Dimensions (W × H × D)
Touch screen: 7.0" × 5.1" × 0.75" (178 × 130 × 19 mm)
Dock: 4.2" × 2.3" × 3.1" (106 × 57 × 78 mm)
Weight
Touch screen: 0.7 lbs. (0.32 kg)
Dock: 0.8 lbs (0.36 kg)
Planning the installation
• Make sure the dock can be located near a power outlet and a strong WiFi signal.
• For the most reliable connection, we recommend running Ethernet to the dock.
• Make sure the touch screen's WiFi or Ethernet connection is on the same network as your controller and that the signal is strong.
• Communication agent is required for intercom use.
• Charge the touch screen for at least six hours before use.
For more product information
Visit ctrl4.co/t3series
Control4® T3-7 7" Tabletop Touch Screen
DOC-00148-C
2015-10-09 MSC
Copyright ©2015, Control4 Corporation. All rights reserved. Control4, the Control4 logo, the 4-ball logo, 4Store,
4Sight, Control My Home, Everyday Easy, and Mockupancy are registered trademarks or trademarks of Control4
Question: What's the display specifications?
The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}}
the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.
Here is the output schema:
{"properties": {"product_name": {"title": "Product Name", "type": "string"}, "manufactured_date": {"title": "Manufactured Date", "type": "string", "format": "date-time"}, "size_inch": {"title": "Size Inch", "type": "string"}, "resolution": {"title": "Resolution", "type": "string"}, "contrast": {"title": "Contrast", "type": "string"}, "operation_temperature": {"title": "Operation Temperature", "type": "string"}, "power_supply": {"title": "Power Supply", "type": "string"}, "sunlight_readable": {"title": "Sunlight Readable", "type": "boolean"}, "antiglare": {"title": "Antiglare", "type": "boolean"}, "low_power_consumption": {"title": "Low Power Consumption", "type": "boolean"}, "high_brightness": {"title": "High Brightness", "type": "boolean"}, "wide_temperature": {"title": "Wide Temperature", "type": "boolean"}, "fast_response": {"title": "Fast Response", "type": "boolean"}, "screen_features": {"title": "Screen Features", "type": "array", "items": {"type": "string"}}}, "required": ["product_name", "manufactured_date", "size_inch", "resolution", "contrast", "operation_temperature", "power_supply", "sunlight_readable", "antiglare", "low_power_consumption", "high_brightness", "wide_temperature", "fast_response", "screen_features"]}
[0m
[1m> Finished chain.[0m
[1m> Entering new chain...[0m
Prompt after formatting:
[32;1m[1;3mThe original question is as follows: What's the display specifications?
We have provided an existing answer:
Answer: {
"product_name": "Control4® T3-7 7\" Tabletop Touch Screen",
"manufactured_date": "2015-10-09",
"size_inch": "7.0\" × 5.1\" × 0.75\"",
"resolution": "N/A",
"contrast": "N/A",
"operation_temperature": "N/A",
"power_supply": "N/A",
"sunlight_readable": "N/A",
"antiglare": "N/A",
"low_power_consumption": "N/A",
"high_brightness": "N/A",
"wide_temperature": "N/A",
"fast_response": "N/A",
"screen_features": []
}
We have the opportunity to refine the existing answer(only if needed) with some more context below.
------------
Model numbers
C4-TT7-1-BL, C4-TT7-1-WH, C4-TT7-1-RD
Features
Screen
Resolution: 1280 × 800
Capacitive touch
Camera: 720p
Network
Ethernet or WiFi (802.11 g/n [2.4 GHz])
Notes:
(1) 802.11b is not supported for Video Intercom.
(2) Wireless-N is recommended for Video Intercom. The more devices that Video Intercom is broadcast to, the
more response time and images are degraded.
Battery
3100 mAh Li-ion
Power supply
PoE (IEEE802.3af)
100VAC ~ 240VAC, 50-60 Hz
International power supply adapters included
Dock connections
•
Ethernet
•
PoE
•
DC power
Mounting
Tabletop or portable
Environmental
Operating temperature
32 ~ 104˚F (0˚ ~ 40˚C)
Storage temperature
32 ~ 104˚F (0˚ ~ 40˚C)
Dimensions (W × H × D)
Touch screen: 7.0" × 5.1" × 0.75" (178 × 130 × 19 mm)
Dock: 4.2" × 2.3" × 3.1" (106 × 57 × 78 mm)
Weight
Touch screen: 0.7 lbs. (0.32 kg)
Dock: 0.8 lbs (0.36 kg)
Planning the installation
• Make sure the dock can be located near a power outlet and a strong WiFi signal.
------------
Given the new context, refine the original answer to better answer the question. If the context isn't useful, return the original answer.[0m
[1m> Finished chain.[0m
[1m> Entering new chain...[0m
Prompt after formatting:
[32;1m[1;3mThe original question is as follows: What's the display specifications?
We have provided an existing answer:
Answer: {
"product_name": "Control4® T3-7 7\" Tabletop Touch Screen",
"manufactured_date": "2015-10-09",
"model_numbers": ["C4-TT7-1-BL", "C4-TT7-1-WH", "C4-TT7-1-RD"],
"size_inch": "7.0\" × 5.1\" × 0.75\"",
"resolution": "1280 × 800",
"contrast": "N/A",
"operation_temperature": "32 ~ 104˚F (0˚ ~ 40˚C)",
"storage_temperature": "32 ~ 104˚F (0˚ ~ 40˚C)",
"power_supply": "PoE (IEEE802.3af) 100VAC ~ 240VAC, 50-60 Hz",
"sunlight_readable": "N/A",
"antiglare": "N/A",
"low_power_consumption": "N/A",
"high_bright
We have the opportunity to refine the existing answer(only if needed) with some more context below.
------------
Control4® T3-7 7" Tabletop Touch Screen
DOC-00148-C
2015-10-09 MSC
Copyright ©2015, Control4 Corporation. All rights reserved. Control4, the Control4 logo, the 4-ball logo, 4Store,
4Sight, Control My Home, Everyday Easy, and Mockupancy are registered trademarks or trademarks of Control4
Corporation in the United States and/or other countries. All other names and brands may be claimed as the
property of their respective owners. All specifications subject to change without notice.
------------
Given the new context, refine the original answer to better answer the question. If the context isn't useful, return the original answer.[0m
[1m> Finished chain.[0m
[1m> Entering new chain...[0m
Prompt after formatting:
[32;1m[1;3mThe original question is as follows: What's the display specifications?
We have provided an existing answer:
Answer: {
"product_name": "Control4® T3-7 7\" Tabletop Touch Screen",
"manufactured_date": "2015-10-09",
"model_numbers": ["C4-TT7-1-BL", "C4-TT7-1-WH", "C4-TT7-1-RD"],
"size_inch": "7.0\" × 5.1\" × 0.75\"",
"resolution": "1280 × 800",
"contrast": "N/A",
"operation_temperature": "32 ~ 104˚F (0˚ ~ 40˚C)",
"storage_temperature": "32 ~ 104˚F (0˚ ~ 40˚C)",
"power_supply": "PoE (IEEE802.3af) 100VAC ~ 240VAC, 50-60 Hz",
"sunlight_readable": "N/A",
"antiglare": "N/A",
"low_power_consumption": "N/A",
"high_bright
We have the opportunity to refine the existing answer(only if needed) with some more context below.
------------
Control4® T3 Series 7" Tabletop Touch Screen
The Control4® T3 Series Tabletop Touch Screen delivers always-on, dedicated, and mobile control over all the technology in your home
or business. Featuring a gorgeous new tablet design and stunning high-resolution graphics, this portable screen looks beautiful whether
on a kitchen countertop or in the theater on your lap. This model includes HD video intercom and crystal-clear audio intercom for
convenient communications from room to room or with visitors at the door.
• Available in a 7" model, the T3 Series Portable Touch Screen provides dedicated, elegant, and mobile control of your home.
• HD camera, combined with speakers and microphone, provides the best video intercom experience yet.
• Crisp picture with two and a half times the resolution of previous models.
• Extremely fast and responsive—up to 16 times faster than our previous touch screens.
------------
Given the new context, refine the original answer to better answer the question. If the context isn't useful, return the original answer.[0m
[1m> Finished chain.[0m
[1m> Finished chain.[0m
Tokens Used: 3432
Prompt Tokens: 2465
Completion Tokens: 967
Successful Requests: 4
Total Cost (USD): $0.06864
```
## Result
```
Tokens Used: 3432
Prompt Tokens: 2465
Completion Tokens: 967
Successful Requests: 4
Total Cost (USD): $0.06864
Answer: {
"product_name": "Control4® T3 Series 7\" Tabletop Touch Screen",
"manufactured_date": "2015-10-09",
"model_numbers": ["C4-TT7-1-BL", "C4-TT7-1-WH", "C4-TT7-1-RD"],
"size_inch": "7.0\" × 5.1\" × 0.75\"",
"resolution": "1280 × 800",
"contrast": "N/A",
"operation_temperature": "32 ~ 104˚F (0˚ ~ 40˚C)",
"storage_temperature": "32 ~ 104˚F (0˚ ~ 40˚C)",
"power_supply": "PoE (IEEE802.3af) 100VAC ~ 240VAC, 50-60 Hz",
"sunlight_readable": "N/A",
"antiglare": "N/A",
"low_power_consumption": "N/A",
"high_brightness
```
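One hedged guess, not verified against this setup: the JSON stops mid-key, which is what a completion that hit the model's `max_tokens` cap looks like. If the chain's LLM is left at its default completion limit, raising it is a cheap first check:
```python
from langchain.chat_models import ChatOpenAI

# Assumption: the chain is built on an OpenAI chat model with the default completion limit.
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, max_tokens=1024)
```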
### Suggestion:
_No response_ | Issue: RetrievalQA response incomplete | https://api.github.com/repos/langchain-ai/langchain/issues/7177/comments | 1 | 2023-07-05T07:19:18Z | 2023-07-05T07:33:37Z | https://github.com/langchain-ai/langchain/issues/7177 | 1,788,935,367 | 7,177 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
# load the document and split it into chunks
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# load it into Chroma
db = Chroma.from_documents(docs, embedding_function)
# query it
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
# print results
print(docs[0].page_content)
```
```
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory="./chroma_db")
db2.persist()
docs = db.similarity_search(query)
# load from disk
db3 = Chroma(persist_directory="./chroma_db")
docs = db.similarity_search(query)
print(docs[0].page_content)
```
### Idea or request for content:
In above code, I find it difficult to understand this paragraph:
```
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory="./chroma_db")
db2.persist()
docs = db.similarity_search(query)
# load from disk
db3 = Chroma(persist_directory="./chroma_db")
docs = db.similarity_search(query)
print(docs[0].page_content)
```
Although `db2` and `db3` do demonstrate saving and loading Chroma, the two `docs = db.similarity_search(query)` calls have nothing to do with saving or loading; they still search against the original `db`. A corrected sketch is included below for reference.
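For reference, a corrected sketch of what the example presumably intends: the reloaded store also needs the embedding function (so it can embed incoming queries), and it should be the store that gets queried. This is my reading of the snippet, not an official fix:
```python
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory="./chroma_db")
db2.persist()

# load from disk and query the reloaded store
db3 = Chroma(persist_directory="./chroma_db", embedding_function=embedding_function)
docs = db3.similarity_search(query)
print(docs[0].page_content)
```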
Is this an error? | saving and loading embedding from Chroma | https://api.github.com/repos/langchain-ai/langchain/issues/7175/comments | 13 | 2023-07-05T06:52:10Z | 2024-07-01T19:22:22Z | https://github.com/langchain-ai/langchain/issues/7175 | 1,788,892,758 | 7,175 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain = 0.0.220
python = 3.11.4
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Open the chat with search langchain/streamlit app[here](https://llm-examples.streamlit.app/Chat_with_search) and go to the "Chat with search" page
2. Ask "current meta and tesla stock price?" in the chat
3. You should see in the response the formatting get messed up because it interprets two dollar signs as a latex equation. Normally I use a function to escape the dollar signs but you may want to do this on your callback.

### Expected behavior
Expect plain text instead of a LaTeX equation. I've attached an example where I escape the dollar signs manually; a minimal sketch of that escaping is shown below the screenshot.

| StreamlitCallbackHandler doesn't double escape dollar signs, so two dollar signs makes everything between an equation | https://api.github.com/repos/langchain-ai/langchain/issues/7172/comments | 1 | 2023-07-05T05:23:14Z | 2023-10-12T16:06:56Z | https://github.com/langchain-ai/langchain/issues/7172 | 1,788,788,050 | 7,172 |
[
"hwchase17",
"langchain"
]
| ### System Info
Based on the official docs, I created two types of retriever:
1. `faiss_retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()` serving as a `VectorStoreRetriever` (referenced from the API doc)
2. `compression_retriever = ContextualCompressionRetriever(base_compressor=relevant_filter, base_retriever=retriever)` functioning as a `ContextualCompressionRetriever` (also referenced from the API doc)
Then I ran RetrievalQA to fetch relevant content through the chain, using the code below:
```python
qa = RetrievalQA.from_chain_type(llm=OpenAI( verbose=True), chain_type="stuff", retriever=compression_retriever,return_source_documents=True, chain_type_kwargs=chain_type_kwargs)
# or
qa = RetrievalQA.from_chain_type(llm=OpenAI( verbose=True), chain_type="stuff", retriever=faiss_retriever,return_source_documents=True, chain_type_kwargs=chain_type_kwargs)
```
The result was that the QA chain with `compression_retriever` failed to return context for the prompt (it returned an empty array), whereas the QA chain with `faiss_retriever` successfully returned the context.
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains import RetrievalQA
from langchain.document_loaders import UnstructuredFileLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import EmbeddingsFilter
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

loader = UnstructuredFileLoader("NER.txt")
document = loader.load()
separators = ["。", " "]
text_splitter = RecursiveCharacterTextSplitter(separators=separators, chunk_size=500, chunk_overlap=0)
texts = text_splitter.split_documents(document)
embeddings = OpenAIEmbeddings()
relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.81)
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever() # base retriever
compression_retriever = ContextualCompressionRetriever(base_compressor=relevant_filter, base_retriever=retriever) # document compression retriver
from langchain.prompts import PromptTemplate
prompt_template1 = """plase use context to answer question.
{context}
question: {question}
anwser:"""
PROMPT = PromptTemplate(
template=prompt_template1, input_variables=["context", "question"]
)
chain_type_kwargs = {"prompt": PROMPT,'verbose': True}
qa = RetrievalQA.from_chain_type(llm=OpenAI( verbose=True), chain_type="stuff", retriever=compression_retriever,return_source_documents=True, chain_type_kwargs=chain_type_kwargs)
query = "balabalabala" # replace it with question
result = qa({"query": query})
print(result)
```
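One hedged observation, not a confirmed diagnosis: with `similarity_threshold=0.81` the `EmbeddingsFilter` may be discarding every retrieved chunk, which would also yield an empty context. Lowering the threshold is a quick way to rule that out:
```python
# Hypothetical value; 0.6 is only meant to test whether the threshold is the culprit.
relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.6)
```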
### Expected behavior
Using a `ContextualCompressionRetriever` with `RetrievalQA` should return a non-empty context, i.e. not `[]`.
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://python.langchain.com/docs/get_started/quickstart.html
On the above link, I have tried following along on google colab and ran into an issue as following:
Observation: Olivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.
Thought:I have found information about Olivia Wilde's boyfriend. Now I need to calculate his current age raised to the 0.23 power.
Action:
```
{
"action": "Calculator",
"action_input": "age ^ 0.23"
}
```
As you can see, instead of looking up Harry Styles' age, it just passes the literal string 'age' to the calculator, which raises a ValueError.
This is quite odd, since the dedicated agents tutorial works well:
https://python.langchain.com/docs/modules/agents.html
What could be the issue?
### Idea or request for content:
_No response_ | DOC: The Quickstart tutorial for Agents has an error | https://api.github.com/repos/langchain-ai/langchain/issues/7166/comments | 3 | 2023-07-05T01:11:06Z | 2023-10-12T16:07:06Z | https://github.com/langchain-ai/langchain/issues/7166 | 1,788,611,749 | 7,166 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I'd like to make the user experience more conversational while supporting OpenAI functions. However, the OpenAIFunctionsAgent implementation doesn't accept "memory", so the conversation history is lost between turns. I'd like to have ReAct-style planning capability plus functions as tools. I think the minimal implementation is to just add memory support to OpenAIFunctionsAgent.
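A hedged sketch of the wiring I have in mind, based on the `extra_prompt_messages` hook that `OpenAIFunctionsAgent.create_prompt` exposes (I have not verified this is the intended way to do it):
```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.prompts import MessagesPlaceholder

tools = []  # plug in the function-calling tools here
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent = initialize_agent(
    tools,
    ChatOpenAI(model_name="gpt-3.5-turbo-0613", temperature=0),
    agent=AgentType.OPENAI_FUNCTIONS,
    agent_kwargs={"extra_prompt_messages": [MessagesPlaceholder(variable_name="chat_history")]},
    memory=memory,
    verbose=True,
)
```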
### Motivation
While answering and executing tools is a great feature supported by OpenAIFunctionsAgent, more streamlined user experiences like chat are often desired as well.
### Your contribution
Happy to make a PR with a guideline if this is something desired in langchain. | OpenAIFunctionsAgent + ConversationalChatAgent? | https://api.github.com/repos/langchain-ai/langchain/issues/7163/comments | 6 | 2023-07-04T22:28:54Z | 2023-10-19T16:06:43Z | https://github.com/langchain-ai/langchain/issues/7163 | 1,788,529,698 | 7,163 |
[
"hwchase17",
"langchain"
]
| ### System Info
I'm using the following on Windows:
Python 3.11.3
langchain 0.0.222
lark 1.1.5
With a Pinecone index:
Environment: us-east4-gcp
Metric: cosine
Pod Type: p1.x1
Dimensions: 1536
### Who can help?
@hwchase17 @angola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.vectorstores import Pinecone

embeddings = OpenAIEmbeddings()
vectorstore = Pinecone.from_existing_index(index_name="index1", embedding=embeddings, namespace="metamovies")

# llm, document_content_description and metadata_description are set up as in the self-query docs
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_description, verbose=True)

# This example specifies a query and composite filter
relevantdocs = retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated")
```
### Expected behavior
A list of relevant Documents should be returned.
Instead, I get a runtime error:
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='animated')]) limit=None
HTTP response body: {"code":3,"message":"$contain is not a valid operator","details":[]} | SelfQuering Retrieval no support $contain operator | https://api.github.com/repos/langchain-ai/langchain/issues/7157/comments | 2 | 2023-07-04T18:56:27Z | 2023-10-12T16:07:11Z | https://github.com/langchain-ai/langchain/issues/7157 | 1,788,350,317 | 7,157 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Some vectorstores (e.g. Vectara) internally create their own embeddings. The request is to generalize the VectorStore base class to allow for embeddings to be optional.
### Motivation
Currently users have to send "None" or FakeEmbeddings instead, which creates additional work and is not needed.
### Your contribution
Happy to help with a PR (with guidance from the main project team) | Make "embedding" an optional parameter in VectorStore interface | https://api.github.com/repos/langchain-ai/langchain/issues/7150/comments | 2 | 2023-07-04T15:47:14Z | 2023-10-12T16:07:16Z | https://github.com/langchain-ai/langchain/issues/7150 | 1,788,162,432 | 7,150 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I keep getting this error for the past couple of days for gpt-3.5-turbo-16k:
Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=120.0)
The OpenAI API seems to be working fine by itself. Can someone please tell me if they are facing the same issue? Or any suggestions on how to resolve this?
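The only client-side mitigation I can think of (untested on my side) is to raise the read timeout and retry count instead of relying on the 120-second default:
```python
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo-16k",
    request_timeout=300,  # seconds; the error above suggests the default is 120
    max_retries=6,
)
```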
### Suggestion:
_No response_ | Issue: The API call keeps getting timed out | https://api.github.com/repos/langchain-ai/langchain/issues/7148/comments | 13 | 2023-07-04T15:35:27Z | 2024-02-13T16:15:53Z | https://github.com/langchain-ai/langchain/issues/7148 | 1,788,148,043 | 7,148 |
[
"hwchase17",
"langchain"
]
| ### System Info
- langchain-0.0.222 (and all before)
- Any GPT4All python package after [this commit](https://github.com/nomic-ai/gpt4all/commit/46a0762bd5a7e605e9bd63e4f435b482eff026f6#diff-cc3ea7dfbfc9837a4c42dae1089a1eda0ed175d17f2628cf16c13d3cd9da6e13R174) was merged. So latest: >= 1.0.1.
Note: the issue surfaces in LangChain but is caused by GPT4All's change. We need to alter the `_default_params()` return values to exclude the keys that were removed from GPT4All's `generate()` kwargs.
### Who can help?
@hwchase17 @agola11 👋
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install the latest gpt4all package: `pip install --upgrade gpt4all`
2. Use the GPT4All LLM and LLMChain as normal (llm = GPT4All(model="ggml.gpt4all.xyz.bin"); chain = LLMChain(llm=llm, prompt=any_prompt))
3. Run the chain: chain("prompt")
4. TypeError: generate() got an unexpected keyword argument 'n_ctx'
### Expected behavior
Should not cause TypeError. It should not pass n_ctx from default parameters to GPT4All's `generate()` | GPT4All generate() TypeError 'n_ctx' since a commit on GPT4All's python binding changed arguments | https://api.github.com/repos/langchain-ai/langchain/issues/7145/comments | 7 | 2023-07-04T14:22:38Z | 2023-11-07T14:10:16Z | https://github.com/langchain-ai/langchain/issues/7145 | 1,788,030,844 | 7,145 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi there 👋
Thanks a lot for the awesome library. The current implementation of `BaseCache` stores the prompt + the llm generated text as key.
This means that I am not really caching since I'll have to do a request to OpenAI to get the llm text
### Motivation
I'd like to cache a prompt
### Your contribution
I am willing to contribute but you need to explain me how :) | Caching: allows to cache only the prompt | https://api.github.com/repos/langchain-ai/langchain/issues/7141/comments | 12 | 2023-07-04T12:46:56Z | 2024-06-27T16:06:04Z | https://github.com/langchain-ai/langchain/issues/7141 | 1,787,867,349 | 7,141 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I would love to add an [H2O Wave](https://wave.h2o.ai/) framework callback integration in a similar manner as done for [Streamlit](https://python.langchain.com/docs/modules/callbacks/integrations/streamlit). Wave has recently added a dedicated chatbot card, which seems like a perfect fit.


### Motivation
This would allow for bringing more diversity when it comes to Python UI frameworks + Langchain integration. Moreover, Wave is async by nature, so it seems like a [custom async callback](https://python.langchain.com/docs/modules/callbacks/how_to/async_callbacks) or maybe an [async generator](https://github.com/hwchase17/langchain/blob/master/langchain/callbacks/streaming_aiter.py) would do.
### Your contribution
I am willing to provide the PR with everything that is needed. | H2O Wave callback integration | https://api.github.com/repos/langchain-ai/langchain/issues/7139/comments | 5 | 2023-07-04T11:56:42Z | 2024-02-07T16:29:24Z | https://github.com/langchain-ai/langchain/issues/7139 | 1,787,787,874 | 7,139 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I am using the ConversationalRetrievalChain to retrieve answers to questions while condensing the chat history into a standalone question. However, the standalone question shows up in the streaming output.
I expect only the final answer to be streamed. Is there any way to achieve this?
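A possible sketch, untested here and assuming the `condense_question_llm` argument is available in this version: route the condense step through a second, non-streaming LLM so only the answer tokens reach the streaming callback.
```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

streaming_llm = ChatOpenAI(temperature=0, streaming=True,
                           callbacks=[StreamingStdOutCallbackHandler()])
condense_llm = ChatOpenAI(temperature=0)  # no streaming callback attached

chain = ConversationalRetrievalChain.from_llm(
    llm=streaming_llm,                   # answers stream to the callback
    condense_question_llm=condense_llm,  # question rewriting stays silent
    retriever=retriever,                 # your existing retriever
)
```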
### Motivation
The immediate is not necessary for the answer.
### Your contribution
I am not sure. | Returns the standone alone question while using ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/7136/comments | 5 | 2023-07-04T11:02:11Z | 2024-02-06T16:32:56Z | https://github.com/langchain-ai/langchain/issues/7136 | 1,787,705,531 | 7,136 |
[
"hwchase17",
"langchain"
]
| ### System Info
0.0.220
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
self.memory = ConversationBufferMemory(memory_key="chat_history", output_key='output')
self.agent.memory.buffer = self.agent.memory.buffer + Human_prompt + 'AI: ' + AI_prompt
```
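For what it's worth, a hedged sketch of the workaround I fell back to: append the turn through `save_context` instead of assigning to `.buffer`, since `buffer` looks like a derived property rather than a settable pydantic field (my reading, not confirmed):
```python
# Same intent as the buffer concatenation above, but via the public memory API.
self.agent.memory.save_context({"input": Human_prompt}, {"output": AI_prompt})
```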
### Expected behavior
no error output | "ConversationBufferMemory" object has no field "buffer" | https://api.github.com/repos/langchain-ai/langchain/issues/7135/comments | 4 | 2023-07-04T09:36:49Z | 2023-10-12T16:07:22Z | https://github.com/langchain-ai/langchain/issues/7135 | 1,787,553,988 | 7,135 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I installed "langchain==0.0.27" on a linux machine but i am getting the folowing error when i try to import langchain in a script. Was running this with python 3.7.
```
/home/s0s06c3/lang/lang_env/bin/python /home/s0s06c3/lang/hugging_lanchain.py
Traceback (most recent call last):
File "/home/s0s06c3/lang/hugging_lanchain.py", line 2, in <module>
from langchain import PromptTemplate, HuggingFaceHub, LLMChain
File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/__init__.py", line 8, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import Agent
File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/agents/agent.py", line 10, in <module>
from langchain.chains.base import Chain
File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/chains/__init__.py", line 2, in <module>
from langchain.chains.conversation.base import ConversationChain
File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/chains/conversation/base.py", line 7, in <module>
from langchain.chains.conversation.memory import ConversationBufferMemory
File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/chains/conversation/memory.py", line 7, in <module>
from langchain.chains.conversation.prompt import SUMMARY_PROMPT
File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/chains/conversation/prompt.py", line 2, in <module>
from langchain.prompts.prompt import PromptTemplate
File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/prompts/__init__.py", line 2, in <module>
from langchain.prompts.base import BasePromptTemplate
File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/prompts/base.py", line 35, in <module>
class BasePromptTemplate(BaseModel, ABC):
File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/prompts/base.py", line 41, in BasePromptTemplate
@root_validator()
File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/pydantic/deprecated/class_validators.py", line 231, in root_validator
code='root-validator-pre-skip',
pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.
```
I tried setting this up on my local machine as well, but got the same issue there too.
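A hedged guess at the cause: langchain 0.0.27 predates pydantic v2, and the frames under `pydantic/deprecated/` in the traceback suggest pydantic 2.x is installed. A quick check, with the likely fix noted in the comment:
```python
import pydantic

print(pydantic.VERSION)
# If this prints 2.x, pinning pydantic below 2 (pip install "pydantic<2"),
# or upgrading langchain itself, should clear the root_validator error.
```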
### Suggestion:
_No response_ | Issue: Can not import the Langchain modules. | https://api.github.com/repos/langchain-ai/langchain/issues/7131/comments | 9 | 2023-07-04T08:07:28Z | 2024-07-25T17:41:17Z | https://github.com/langchain-ai/langchain/issues/7131 | 1,787,396,398 | 7,131 |
[
"hwchase17",
"langchain"
]
| ### System Info
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization org-gvlyS3A1UcZNvf8Qch6TJZe3 on tokens per min. Limit: 150000 / min. Current: 1 / min. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method..
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import PyPDFLoader
import os

loader = PyPDFLoader("3gpp_cn/29502-i30.pdf")
pages = loader.load_and_split()
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(pages, embeddings)
db.save_local("numpy_faiss_index")
```
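For reference, a hedged mitigation sketch, assuming the key actually has quota and the failure is only a tokens-per-minute burst (if it is a free-trial key, adding billing is probably the real fix): embed the pages in small batches with pauses.
```python
import time

# Batch size and sleep interval are guesses; tune them to your tokens-per-minute limit.
db = FAISS.from_documents(pages[:50], embeddings)
for i in range(50, len(pages), 50):
    time.sleep(60)
    db.add_documents(pages[i:i + 50])
db.save_local("numpy_faiss_index")
```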
### Expected behavior
How to solve it? | RateLimitError | https://api.github.com/repos/langchain-ai/langchain/issues/7130/comments | 4 | 2023-07-04T07:50:43Z | 2023-10-12T16:07:27Z | https://github.com/langchain-ai/langchain/issues/7130 | 1,787,368,680 | 7,130 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
During the use of ChatOpenAI, it was found that even if langchain.llm_cache=True is set, the answer is different each time for the same question, such as "What is LangChain".
Upon tracing the source code, it was discovered that ChatOpenAI inherited BaseChatModel, which does not support the caching logic.
If the type of LLM switched from ChatOpenAI to ChatGPT, caching will be effective. Therefore, using ChatOpenAI in the LangChain Document example is incorrect, and should be replaced with ChatGPT. The corresponding address is: https://python.langchain.com/docs/modules/model_io/models/chat/how_to/chat_model_caching.
### Idea or request for content:
It is suggested to modify the example and replace ChatOpenAI with ChatGPT.
llm = ChatGPT() | DOC: The LLM type given in the Caching section is incorrect. It should be changed from ChatOpenAI to ChatGPT | https://api.github.com/repos/langchain-ai/langchain/issues/7129/comments | 3 | 2023-07-04T07:48:45Z | 2023-10-16T16:06:34Z | https://github.com/langchain-ai/langchain/issues/7129 | 1,787,364,347 | 7,129 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I have been working on a side project using Pinecone and discovered that there are a bunch of useful Pinecone features in Langchain that are not mentioned in Langchain's documentation. [https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/pinecone](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/pinecone)
### Idea or request for content:
I suggest expanding the documentation to mention the other functions of LangChain's Pinecone integration. A sentence or two about the use cases of each function would also be helpful. :+1:
P.S. I would not mind implementing this change, would be a first good issue :) | DOC: Pinecone documentation missing features | https://api.github.com/repos/langchain-ai/langchain/issues/7126/comments | 0 | 2023-07-04T06:58:58Z | 2023-07-05T19:18:03Z | https://github.com/langchain-ai/langchain/issues/7126 | 1,787,284,168 | 7,126 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Here is my code, where I use FAISS as the vector store, but it is not capable of giving a response from my PDF data. Where am I going wrong? Please correct me.
```python
import pickle

import faiss
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate


def chat_bot_langchain():
    from langchain.llms import OpenAI

    index = faiss.read_index("faisss_audien_new.index")
    print(index)
    with open("faisss_audien_new.pkl", "rb") as f:
        store = pickle.load(f)
    store.index = index
    retriever = store.as_retriever(search_kwargs=dict(k=3))
    memory = ConversationBufferMemory(retriever=retriever)
    llm = OpenAI(temperature=0, max_tokens=1500, model_name='gpt-3.5-turbo')  # Can be any valid LLM

    _DEFAULT_TEMPLATE = """
    You are having a conversation with a human your name is Kira and your age is 25 year old. Please interact naturally
    and avoid mentioning that you are an AI language model.\n
    You also a human being. provides lots of specific details from its context.\n
    Relevant pieces of previous conversation:
    {history}
    (You do not need to use these pieces of information if not relevant)
    Current conversation:
    User: {input}
    Kira AI:
    """
    PROMPT = PromptTemplate(
        input_variables=["history", "input"], template=_DEFAULT_TEMPLATE
    )
    conversation_with_summary = ConversationChain(
        llm=llm,
        prompt=PROMPT,
        memory=memory,
        verbose=True
    )
    while True:
        user_input = input("> ")
        ai_response = conversation_with_summary.predict(input=user_input)
        print("\nAssistant:\n", ai_response, "\n")
        # conversation_with_summary.predict(input="")


chat_bot_langchain()
```
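A hedged sketch of the memory wiring I suspect is needed (not verified): `ConversationBufferMemory` does not use the `retriever` argument, so the FAISS store is never consulted; `VectorStoreRetrieverMemory` is the memory class that fills `{history}` from the vector store.
```python
from langchain.memory import VectorStoreRetrieverMemory

# Replaces the ConversationBufferMemory(retriever=...) line above.
memory = VectorStoreRetrieverMemory(retriever=retriever, memory_key="history")
```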
### Suggestion:
_No response_ | ConversationalBufferMemory is not working with my Data | https://api.github.com/repos/langchain-ai/langchain/issues/7121/comments | 1 | 2023-07-04T06:02:36Z | 2023-10-12T16:07:37Z | https://github.com/langchain-ai/langchain/issues/7121 | 1,787,208,532 | 7,121 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am getting an AttributeError when using `chroma.from_documents`.

My code:

```python
db = chroma.from_documents(texts, embeddings, persist_directory=persist_directory,
                           client_settings=CHROMA_SETTINGS)
```

The error:

```
AttributeError: module 'langchain.vectorstores.chroma' has no attribute 'from_documents'
```
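A hedged reading of the error: `chroma` (lowercase) is the module, while `from_documents` is a classmethod on the `Chroma` class, so importing the class should resolve it:
```python
from langchain.vectorstores import Chroma

db = Chroma.from_documents(texts, embeddings, persist_directory=persist_directory,
                           client_settings=CHROMA_SETTINGS)
```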
### Suggestion:
Please help resolve this error | Issue: from_documents error | https://api.github.com/repos/langchain-ai/langchain/issues/7119/comments | 2 | 2023-07-04T03:25:31Z | 2023-10-12T16:07:42Z | https://github.com/langchain-ai/langchain/issues/7119 | 1,787,076,435 | 7,119 |
[
"hwchase17",
"langchain"
]
| ### System Info
`from_texts` in `ElasticKnnSearch` is not creating a new index.
`add_texts` is not creating the correct mapping.
Working implementations of both methods existed at one point, but they [got removed accidentally](https://github.com/hwchase17/langchain/pull/5569/commits/98f5038b1a6a6ee6f3108f95b27408ca23901724#) in a later commit.
I will add them back to `ElasticKnnSearch` with the correct mapping and behavior.
### Who can help?
@jeffvea
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running
```
# Test `add_texts` method
texts = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knn_search.add_texts(texts)
```
This incorrectly creates an index whose `dense_vector` field has `index: false`.
Running
```
# Test `from_texts` method
new_texts = ["This is a new text.", "Elasticsearch is powerful.", "Python is fun."]
knn_search.from_texts(new_texts, dims=768)
```
This throws an error about a missing keyword argument for embeddings.
### Expected behavior
Correctly throw an exception when index has not been previously created.
```
# Test `add_texts` method
texts = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knn_search.add_texts(texts)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/runner/langchain-1/langchain/vectorstores/elastic_vector_search.py", line 621, in add_texts
raise Exception(f"The index '{self.index_name}' does not exist. If you want to create a new index while encoding texts, call 'from_texts' instead.")
Exception: The index 'knn_test_index_012' does not exist. If you want to create a new index while encoding texts, call 'from_texts' instead.
```
Correctly create new index
```
# Test `from_texts` method
new_texts = ["This is a new text.", "Elasticsearch is powerful.", "Python is fun."]
knn_search.from_texts(new_texts, dims=768)
```
The mapping is as follows:
```
{
"knn_test_index_012": {
"mappings": {
"properties": {
"text": {
"type": "text"
},
"vector": {
"type": "dense_vector",
"dims": 768,
"index": true,
"similarity": "dot_product"
}
}
}
}
}
```
Correctly index texts after index has been created
```
knn_search.add_texts(texts)
```
| ElasticKnnSearch not creating mapping correctly | https://api.github.com/repos/langchain-ai/langchain/issues/7117/comments | 1 | 2023-07-04T01:52:06Z | 2023-07-28T05:00:22Z | https://github.com/langchain-ai/langchain/issues/7117 | 1,787,013,125 | 7,117 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain: 0.0.222
OS: Windows 11+WSL Ubuntu22
I ran a simple test following the agent documentation guide page:
https://python.langchain.com/docs/modules/agents/
I just changed the input a little bit, and it threw an error from the output parser.
The code is below:
=========================
llm = langchain.chat_models.ChatOpenAI(model_name="gpt-3.5-turbo-16k-0613", temperature=0)
tools = langchain.agents.load_tools(
["serpapi", "llm-math"],
llm=llm
)
agentexecutor = langchain.agents.initialize_agent(
tools=tools,
llm=llm,
agent=langchain.agents.AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION
)
results = agentexecutor.run("What is the number that the age of Donald Trump's wife in year of 2020 to power to 3?")
print(results)
=========================
The error is below:
=========================
[chain/error] [1:RunTypeEnum.chain:AgentExecutor] [40.48s] Chain run errored with error:
"OutputParserException('Could not parse LLM output: I now know the final answer.')"
=========================
The full log file attached here.
[agent_bug20230704.log.txt](https://github.com/hwchase17/langchain/files/11943427/agent_bug20230704.log.txt)
### Who can help?
Agent
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code is below:
=========================
llm = langchain.chat_models.ChatOpenAI(model_name="gpt-3.5-turbo-16k-0613", temperature=0)
tools = langchain.agents.load_tools(
["serpapi", "llm-math"],
llm=llm
)
agentexecutor = langchain.agents.initialize_agent(
tools=tools,
llm=llm,
agent=langchain.agents.AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION
)
results = agentexecutor.run("What is the number that the age of Donald Trump's wife in year of 2020 to power to 3?")
print(results)
=========================
The error is below:
=========================
[chain/error] [1:RunTypeEnum.chain:AgentExecutor] [40.48s] Chain run errored with error:
"OutputParserException('Could not parse LLM output: I now know the final answer.')"
### Expected behavior
The result should be parsed properly. | The agent run output parser cause error when run a simple quick start | https://api.github.com/repos/langchain-ai/langchain/issues/7116/comments | 1 | 2023-07-04T01:32:55Z | 2023-07-04T02:38:26Z | https://github.com/langchain-ai/langchain/issues/7116 | 1,787,000,606 | 7,116 |
[
"hwchase17",
"langchain"
]
| ### System Info
I have been using STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION with a custom tool that takes 2 inputs, but I have not been able to get the agent to produce the outputs we had before. Specifically, we're getting the intermediate, custom-function input back as the final output.
I.e., instead of getting the value associated with the query "What is the `purchase total` for customer 5432 on 07-03-2023?"
we are now getting
```
{
"action": "database_tool",
"action_input": {
"customer_id": "5432",
"local_date": "2023-07-03"
}
}
```
This did not occur before this weekend. Here's a fuller code snippet in the example below.
### Who can help?
@hwcha
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I also tried simplifying the agent call, etc to no avail:
```
llm = ChatOpenAI(
temperature=0,
openai_api_key=openai.api_key
)
memory = ConversationBufferWindowMemory(
memory_key='chat_history',
k=3,
return_messages=True,
input_key='input',
output_key="output"
)
@tool(return_direct=False)
def database_tool(customer_id, local_date) -> str:
"""Useful when questions are asked about specific customer_id details,
particularly recent ones.
If date is not provided, it will default to today's date in yyyy-mm-dd format.
Format your input using the following template.
```
{{
"action": "database_action",
"action_input": {{"customer_id": "<customer id>", "local_date": "<date in yyyy-mm-dd format>"}}
}}
```
"""
db_query = """<QUERY THAT WORKS>
""".format(customer_id=customer_id, local_date=local_date)
formatted_d = get_data.query_database(query=db_query)
return formatted_d
conversational_agent = initialize_agent(
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
tools=[database_tool],
llm=llm,
verbose=True,
early_stopping_method='generate',
memory=memory,
SystemAgentPromptTemplate=prompt_template+"\n The only tool available is the database tool.",
return_intermediate_steps=True,
return_source_documents=True,
handle_parsing_errors='Check your output and make sure there is an equal number of "{" and "}"'
)
response = conversational_agent("What is the `purchase total` for customer 5432 on 07-03-2023?")
print(response['output'])
```
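For comparison, a hedged sketch of declaring the same two-input tool with an explicit `args_schema` via `StructuredTool`, in case the `@tool` decorator's docstring-driven schema is what regressed. Names and the placeholder body are illustrative, not our production code:
```python
from pydantic import BaseModel, Field
from langchain.tools import StructuredTool

class DatabaseInput(BaseModel):
    customer_id: str = Field(description="customer id to look up")
    local_date: str = Field(description="date in yyyy-mm-dd format")

def _database_lookup(customer_id: str, local_date: str) -> str:
    # plug in the same query logic used in database_tool above
    return f"purchase total for {customer_id} on {local_date}"

database_tool = StructuredTool.from_function(
    func=_database_lookup,
    name="database_tool",
    description="Look up purchase details for a customer_id on a given local_date (yyyy-mm-dd).",
    args_schema=DatabaseInput,
)
```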
### Expected behavior
"The purchase total for customer 5432 is 59.55"
I should note the following also works; it's just the agent integration that's problematic:
n = {
"customer_id": "5432",
"local_date": "2023-07-03"
}
database_tool(n) | STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION Custom Tools Failure | https://api.github.com/repos/langchain-ai/langchain/issues/7108/comments | 5 | 2023-07-03T22:07:55Z | 2024-03-26T10:27:30Z | https://github.com/langchain-ai/langchain/issues/7108 | 1,786,864,012 | 7,108 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When I run the command
```bash
make coverage
```
I get the following error:
```bash
collected 1556 items / 9 errors
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1560, in getoption
INTERNALERROR> val = getattr(self.option, name)
INTERNALERROR> AttributeError: 'Namespace' object has no attribute 'only_extended'
INTERNALERROR>
INTERNALERROR> The above exception was the direct cause of the following exception:
INTERNALERROR>
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/_pytest/main.py", line 269, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/_pytest/main.py", line 322, in _main
INTERNALERROR> config.hook.pytest_collection(session=session)
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_callers.py", line 39, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/_pytest/main.py", line 333, in pytest_collection
INTERNALERROR> session.perform_collect()
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/_pytest/main.py", line 668, in perform_collect
INTERNALERROR> hook.pytest_collection_modifyitems(
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_callers.py", line 39, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/home/mohtashimkhan/langchain/tests/unit_tests/conftest.py", line 43, in pytest_collection_modifyitems
INTERNALERROR> only_extended = config.getoption("--only-extended") or False
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1571, in getoption
INTERNALERROR> raise ValueError(f"no option named {name!r}") from e
INTERNALERROR> ValueError: no option named 'only_extended'
```
I am not sure what the root cause of this issue is. I created a new conda environment and installed the poetry `test`, `test_integration`, and main dependency groups from scratch.
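As a local workaround I am considering (not a confirmed fix), registering the missing option in the repository-root `conftest.py`, so collection does not blow up when pytest is invoked from the root via `make coverage`:
```python
# Hypothetical root-level conftest.py addition; it mirrors the option the
# unit-test conftest expects but cannot find in this invocation.
def pytest_addoption(parser):
    parser.addoption(
        "--only-extended", action="store_true", help="Only run extended tests."
    )
```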
### Versions:
- python: 3.9
- poetry: 1.5.1
- make: 4.3
- OS: Ubuntu 22.04.1 LTS
### Suggestion:
_No response_ | Issue: Error when running `make coverage` | https://api.github.com/repos/langchain-ai/langchain/issues/7100/comments | 4 | 2023-07-03T21:10:50Z | 2023-12-20T16:07:23Z | https://github.com/langchain-ai/langchain/issues/7100 | 1,786,816,546 | 7,100 |