issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| I am trying to follow the [MRKL chat example](https://langchain.readthedocs.io/en/latest/modules/agents/implementations/mrkl_chat.html) but with AzureOpenAI (`text-davinci-003`) and AzureChatOpenAI (`gpt-3.5-turbo`). However, I am running into this error:
```
ValueError: `stop` found in both the input and default params.
```
I have confirmed that AzureOpenAI and AzureChatOpenAI work independently of the agent:
<img width="992" alt="image" src="https://user-images.githubusercontent.com/404062/226631921-e1c85c76-0bdf-4415-9cae-7149eced7f0f.png">
I am using langchain version 0.0.117.
Seems related to this WIP PR: https://github.com/hwchase17/langchain/pull/1817 | ValueError when using Azure and chat-zero-shot-react-description agent | https://api.github.com/repos/langchain-ai/langchain/issues/1852/comments | 3 | 2023-03-21T14:10:21Z | 2023-09-26T16:14:15Z | https://github.com/langchain-ai/langchain/issues/1852 | 1,634,012,102 | 1,852 |
[
"hwchase17",
"langchain"
]
| The `role` attribute at line 616 of `openai.py` in langchain's `llms` package should expose a parameter or entry point for external modification.
<img width="1146" alt="image" src="https://user-images.githubusercontent.com/29686094/226600002-ba0a89fd-c65a-4d8c-92a0-73e7a44dfdb9.png">
Here are the values the `role` parameter takes when calling the model from the OpenAI website.
<img width="239" alt="1774af7245061cda5320201e6529484e" src="https://user-images.githubusercontent.com/29686094/226600059-a6ae29d8-ac78-47ad-9542-9bf60cacd018.png">
| I would like to provide an entry for role=system | https://api.github.com/repos/langchain-ai/langchain/issues/1848/comments | 4 | 2023-03-21T12:02:49Z | 2023-09-18T16:23:10Z | https://github.com/langchain-ai/langchain/issues/1848 | 1,633,770,590 | 1,848 |
[
"hwchase17",
"langchain"
]
| ```python
import os
import sys
os.environ["OPENAI_API_KEY"] = "..."
from langchain.agents import Tool, load_tools
from langchain.memory import ConversationBufferMemory
from langchain import OpenAI
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.agents import initialize_agent
tool_names = []
tool_names.append("python_repl")
llm=OpenAI(temperature=0.1,model_name="gpt-3.5-turbo")
tools = load_tools(tool_names=tool_names, llm=llm)
memory = ConversationBufferMemory(memory_key="chat_history")
agent_chain = initialize_agent(tools, llm, agent="conversational-react-description", verbose=True, memory=memory)
default_arg = "Hi, how may I help you?"
if len(sys.argv) < 2:
arg= default_arg
else:
arg = sys.argv[1]
while(True):
agent_chain.run(input=arg)
arg = input("\nHi, how may I help you?\n")
```
I run
```shell
python3 main_21_03.py "compute 5 * 6 using python"
```
And I get a loop where the agent is running the right command, but its observation is empty:
```shell
compute 5 * 6 using Python
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Python REPL
Action Input: 5 * 6
Observation:
Thought:Do I need to use a tool? Yes
Action: Python REPL
Action Input: 5 * 6
Observation:
[etc...]
``` | Empty observation when using python_repl | https://api.github.com/repos/langchain-ai/langchain/issues/1846/comments | 2 | 2023-03-21T09:21:58Z | 2023-10-30T16:08:08Z | https://github.com/langchain-ai/langchain/issues/1846 | 1,633,506,456 | 1,846 |
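A possible explanation for the empty observation above, offered as an illustration rather than a confirmed diagnosis: the Python REPL tool returns captured stdout, so a bare expression such as `5 * 6` prints nothing. The snippet assumes `PythonREPL` is importable from `langchain.utilities` in this version.
```python
from langchain.utilities import PythonREPL

# A bare expression writes nothing to stdout, so the captured output is empty;
# wrapping it in print() should produce a non-empty observation.
repl = PythonREPL()
print(repr(repl.run("5 * 6")))         # likely '' because nothing was printed
print(repr(repl.run("print(5 * 6)")))  # likely '30\n'
```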
[
"hwchase17",
"langchain"
]
|
```
agent = create_csv_agent(llm, 'titanic.csv', verbose=True)
json_agent_executor = create_json_agent(llm, toolkit=json_toolkit, verbose=True)
openapi_agent_executor = create_openapi_agent(
llm=OpenAI(temperature=0),
toolkit=openapi_toolkit,
verbose=True
)
agent_executor = create_sql_agent(
llm=OpenAI(temperature=0),
toolkit=toolkit,
verbose=True
)
agent_executor = create_vectorstore_agent(
llm=llm,
toolkit=toolkit,
verbose=True
)
```
```
agent_executor = create_python_agent(llm, tool=PythonREPLTool(),verbose=True)
```
If we could make the second parameter a toolkit, then the Python agent would stay consistent with the other agents.
Is there any extra consideration in the design of this API? | make the `create_python_agent` API more consistent with the other similar APIs | https://api.github.com/repos/langchain-ai/langchain/issues/1845/comments | 2 | 2023-03-21T06:44:54Z | 2023-09-10T16:41:13Z | https://github.com/langchain-ai/langchain/issues/1845 | 1,633,317,706 | 1,845 |
[
"hwchase17",
"langchain"
]
| Using `VectorDBQAWithSourcesChain` with `arun`, I am facing the issue below:
`ValueError: `run` not supported when there is not exactly one output key. Got ['answer', 'sources'].` | Facing issue when using arun with VectorDBQAWithSourcesChain chain | https://api.github.com/repos/langchain-ai/langchain/issues/1844/comments | 19 | 2023-03-21T05:59:09Z | 2023-10-19T16:09:34Z | https://github.com/langchain-ai/langchain/issues/1844 | 1,633,279,135 | 1,844 |
[
"hwchase17",
"langchain"
]
| Hey there, just asking what the progress is on the "Custom Agent Class" for langchain? 🙂
https://langchain.readthedocs.io/en/latest/modules/agents/examples/custom_agent.html#custom-agent-class | Progress on Custom Agent Class? | https://api.github.com/repos/langchain-ai/langchain/issues/1840/comments | 6 | 2023-03-21T02:06:22Z | 2023-09-28T16:10:58Z | https://github.com/langchain-ai/langchain/issues/1840 | 1,633,115,886 | 1,840 |
[
"hwchase17",
"langchain"
]
| I have 3 PDF files in my directory, and I loaded them as documents, added metadata, split them, embedded them, and stored them in Pinecone, like this:
```
loader = DirectoryLoader('data/dir', glob="**/*.pdf", loader_cls=UnstructuredPDFLoader)
data = loader.load()
#I added company names explicitly for now
data[0].metadata["company"]="Apple"
data[1].metadata["company"]="Microsoft"
data[2].metadata["company"]="Tesla"
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=200)
texts = text_splitter.split_documents(data)
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
pinecone.init(
api_key=PINECONE_API_KEY,
environment=PINECONE_API_ENV
)
metadatas = []
for text in texts:
metadatas.append({
"company": text.metadata["company"]
})
Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name, metadatas=metadatas)
```
I want to build a Q&A system, so that when I mention a company name in my query, Pinecone should only look at the documents having that company in the metadata. Here is what I have:
```
pinecone.init(
api_key=PINECONE_API_KEY,
environment=PINECONE_API_ENV
)
index_name = "index"
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
docsearch = Pinecone.from_existing_index(index_name=index_name, embedding=embeddings)
llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)
chain = load_qa_chain(llm, chain_type="stuff")
query = "What is the total revenue of Apple?"
docs = docsearch.similarity_search(query, include_metadata=True)
res = chain.run(input_documents=docs, question=query)
print(res)
```
However, there are still document chunks from non-Apple documents in the output of `docs`. What am I doing wrong here, and how do I utilize the information in the metadata both in the doc search and in the ChatGPT query (if possible)? Thanks | How metadata is being used during similarity search and query? | https://api.github.com/repos/langchain-ai/langchain/issues/1838/comments | 10 | 2023-03-21T01:32:20Z | 2024-03-27T12:24:17Z | https://github.com/langchain-ai/langchain/issues/1838 | 1,633,096,854 | 1,838 |
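For the question above, a minimal sketch of restricting the search with a metadata filter, assuming the `filter` keyword of `similarity_search` is forwarded to Pinecone as in recent langchain versions, and reusing the `docsearch` and `chain` objects defined in the issue:
```python
# Hypothetical: only consider chunks whose metadata marks them as Apple documents.
query = "What is the total revenue of Apple?"
docs = docsearch.similarity_search(query, k=4, filter={"company": "Apple"})
res = chain.run(input_documents=docs, question=query)
print(res)
```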
[
"hwchase17",
"langchain"
]
| 1. Cannot initialize the math chain with a ChatOpenAI LLM
llm_math = LLMMathChain(llm=ChatOpenAI(temperature=0))
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[33], line 1
----> 1 llm_math = LLMMathChain(llm=ChatOpenAI(temperature=0))
File ~/anaconda3/envs/gpt_index/lib/python3.8/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for LLMMathChain
llm
Can't instantiate abstract class BaseLLM with abstract methods _agenerate, _generate, _llm_type (type=type_error)
2. Works ok with OpenAI LLM
llm_math = LLMMathChain(llm=OpenAI(temperature=0))
| LLMMathChain to allow ChatOpenAI as an llm | https://api.github.com/repos/langchain-ai/langchain/issues/1834/comments | 10 | 2023-03-20T23:12:24Z | 2023-04-29T21:57:59Z | https://github.com/langchain-ai/langchain/issues/1834 | 1,633,003,060 | 1,834 |
[
"hwchase17",
"langchain"
]
| Hi! I tried implementing the docs from [here](https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html) but am running into this issue – is it due to OpenAI's API being down?
```
> Entering new AgentExecutor chain...
I need to find the email, summarize it, and send it to slack.
Action: Gmail: Find Email
Action Input: Find the last email I received regarding Silicon Valley BankTraceback (most recent call last):
File "/Users/joship/Desktop/gptops/ops_gpt.py", line 21, in <module>
agent.run("Summarize the last email I received regarding Silicon Valley Bank. Send the summary to slack.")
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 213, in run
return self(args[0])[self.output_keys[0]]
^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 505, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/agents/agent.py", line 423, in _take_next_step
observation = tool.run(
^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/tools/base.py", line 71, in run
raise e
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/tools/base.py", line 68, in run
observation = self._run(tool_input)
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/tools/zapier/tool.py", line 121, in _run
return self.api_wrapper.run_as_str(self.action_id, instructions, self.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/utilities/zapier.py", line 141, in run_as_str
data = self.run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/utilities/zapier.py", line 121, in run
response.raise_for_status()
File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://nla.zapier.com/api/v1/exposed/01GW01TC6DKAN899S3SGKN2MQ3/execute/?
``` | Implementing Zapier example | https://api.github.com/repos/langchain-ai/langchain/issues/1832/comments | 1 | 2023-03-20T22:29:41Z | 2023-09-10T16:41:19Z | https://github.com/langchain-ai/langchain/issues/1832 | 1,632,960,593 | 1,832 |
[
"hwchase17",
"langchain"
]
| I tried following [these docs](https://langchain.readthedocs.io/en/latest/modules/utils/examples/zapier.html) to import Zapier integration
```
from langchain.tools.zapier.tool import ZapierNLARunAction
from langchain.utilities.zapier import ZapierNLAWrapper
```
But I'm getting these errors:
`Traceback (most recent call last):
File "/Users/joship/Desktop/gptops/ops_gpt.py", line 15, in <module>
from langchain.utilities import Zapier
ImportError: cannot import name 'Zapier' from 'langchain.utilities' (/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/utilities/__init__.py)`
This is after running `pip3 install 'langchain[all]'`; is there something I'm missing when it comes to nested modules? | Importing Zapier | https://api.github.com/repos/langchain-ai/langchain/issues/1831/comments | 1 | 2023-03-20T21:50:36Z | 2023-03-20T22:28:18Z | https://github.com/langchain-ai/langchain/issues/1831 | 1,632,916,808 | 1,831 |
[
"hwchase17",
"langchain"
]
| **Issue:** When trying to read data from some URLs, I get a 403 error during load. I assume this is due to the web-server not allowing all user agents.
**Expected behavior:** It would be great if I could specify a user agent (e.g. standard browsers like Mozilla, maybe also Google bots) for making the URL requests.
**My code**
```
from langchain.document_loaders import UnstructuredURLLoader
urls = ["https://dsgvo-gesetz.de/art-1"]
loader = UnstructuredURLLoader(urls=urls)
data = loader.load()
```
**Error message**
```
ValueError Traceback (most recent call last)
Cell In[62], line 1
----> 1 data = loader.load()
File /opt/conda/lib/python3.10/site-packages/langchain/document_loaders/url.py:28, in UnstructuredURLLoader.load(self)
26 docs: List[Document] = list()
27 for url in self.urls:
---> 28 elements = partition_html(url=url)
29 text = "\n\n".join([str(el) for el in elements])
30 metadata = {"source": url}
File /opt/conda/lib/python3.10/site-packages/unstructured/partition/html.py:72, in partition_html(filename, file, text, url, encoding, include_page_breaks, include_metadata, parser)
70 response = requests.get(url)
71 if not response.ok:
---> 72 raise ValueError(f"URL return an error: {response.status_code}")
74 content_type = response.headers.get("Content-Type", "")
75 if not content_type.startswith("text/html"):
ValueError: URL return an error: 403
```
**for reference: URL that works without the 403 error**
```https://www.heise.de/newsticker/``` | UnstructuredURLLoader Error 403 | https://api.github.com/repos/langchain-ai/langchain/issues/1829/comments | 9 | 2023-03-20T21:26:40Z | 2023-06-19T00:47:02Z | https://github.com/langchain-ai/langchain/issues/1829 | 1,632,888,908 | 1,829 |
[
"hwchase17",
"langchain"
]
| The current documentation https://langchain.readthedocs.io/en/latest/modules/agents/getting_started.html seems to not be up to date with version 0.0.117:
```shell
UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
```
EDIT: to be more precise, this only happens if I try to change model_name to "gpt-3.5-turbo" | Documentation not up to date | https://api.github.com/repos/langchain-ai/langchain/issues/1827/comments | 12 | 2023-03-20T18:10:51Z | 2024-02-16T16:10:11Z | https://github.com/langchain-ai/langchain/issues/1827 | 1,632,628,572 | 1,827 |
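A small sketch of what the warning above suggests, assuming the chat model class accepts `model_name` and can be passed to `initialize_agent` the same way as the completion model:
```python
from langchain.agents import initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI

# Instantiate the chat model class directly instead of OpenAI(model_name="gpt-3.5-turbo").
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
agent.run("What is 2 raised to the 10th power?")
```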
[
"hwchase17",
"langchain"
]
|
I am experiencing an issue with Chroma:
`Chroma.from_texts(texts=chunks, embedding=embeddings, persist_directory=config.PERSIST_DIR, metadatas=None)`
opt/anaconda3/lib/python3.8/site-packages/langchain/vectorstores/chroma.py", line 27, in <listcomp>
(Document(page_content=result[0], metadata=result[1]), result[2])
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Document
metadata
none is not an allowed value (type=type_error.none.not_allowed)
Thanks for all the help! | Validation error- Metadata should not be empty or None | https://api.github.com/repos/langchain-ai/langchain/issues/1825/comments | 2 | 2023-03-20T17:50:04Z | 2023-09-10T16:41:24Z | https://github.com/langchain-ai/langchain/issues/1825 | 1,632,592,766 | 1,825 |
[
"hwchase17",
"langchain"
]
| If I'm reading correctly, this is the function to add_texts to Chroma
```
def add_texts(
self,
texts: Iterable[str],
metadatas: Optional[List[dict]] = None,
ids: Optional[List[str]] = None,
**kwargs: Any,
) -> List[str]:
"""Run more texts through the embeddings and add to the vectorstore.
Args:
texts (Iterable[str]): Texts to add to the vectorstore.
metadatas (Optional[List[dict]], optional): Optional list of metadatas.
ids (Optional[List[str]], optional): Optional list of IDs.
Returns:
List[str]: List of IDs of the added texts.
"""
# TODO: Handle the case where the user doesn't provide ids on the Collection
if ids is None:
ids = [str(uuid.uuid1()) for _ in texts]
embeddings = None
if self._embedding_function is not None:
embeddings = self._embedding_function.embed_documents(list(texts))
self._collection.add(
metadatas=metadatas, embeddings=embeddings, documents=texts, ids=ids
)
return ids
```
It does not seem to check if the texts are already inside the database. This means it's very easy to duplicate work when running indexing jobs incrementally. What's more, the Chroma class from langchain.vectorstores does not seem to expose functions to see if some text is already inside the vector store.
What's the preferred way of dealing with this? I can of course set up a separate db that keeps track of hashes of text inside the Chromadb, but this seems unnecessarily clunky and something that you'd expect the db to do for you.
| Avoiding recomputation of embeddings with Chroma | https://api.github.com/repos/langchain-ai/langchain/issues/1824/comments | 9 | 2023-03-20T17:48:30Z | 2023-09-28T16:11:03Z | https://github.com/langchain-ai/langchain/issues/1824 | 1,632,590,839 | 1,824 |
[
"hwchase17",
"langchain"
]
| Hi,
I would like to contribute to LangChain and need to know whether our feature would be relevant as part of LangChain.
I would like to check with you in private. How can we share our idea?
Moshe | New feature: Contribution | https://api.github.com/repos/langchain-ai/langchain/issues/1823/comments | 1 | 2023-03-20T17:03:16Z | 2023-09-10T16:41:29Z | https://github.com/langchain-ai/langchain/issues/1823 | 1,632,522,870 | 1,823 |
[
"hwchase17",
"langchain"
]
| Creating and using AzureChatOpenAI directly works fine, but it crashes when used through ChatVectorDBChain with "ValueError: Should always be something for OpenAI."
Example:
```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import ChatVectorDBChain
from langchain.document_loaders import PagedPDFSplitter
from langchain.chat_models import AzureChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate, SystemMessagePromptTemplate, AIMessagePromptTemplate, HumanMessagePromptTemplate
from langchain.schema import AIMessage, HumanMessage, SystemMessage
system_template="""Use the following pieces of context to answer the users question.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----------------
{context}"""
messages = [
SystemMessagePromptTemplate.from_template(system_template),
HumanMessagePromptTemplate.from_template("{question}")
]
prompt = ChatPromptTemplate.from_messages(messages)
loader = PagedPDFSplitter("myfile.pdf")
text_splitter = CharacterTextSplitter(chunk_size=700, chunk_overlap=50)
documents = loader.load_and_split(text_splitter)
embeddings = OpenAIEmbeddings(chunk_size=1)
vectorstore = FAISS.from_documents(documents, embeddings)
qa = ChatVectorDBChain.from_llm(AzureChatOpenAI(temperature=0, deployment_name='gpt-35-turbo'), vectorstore, qa_prompt=prompt, top_k_docs_for_context=2)
## All good up to here, but the following line crashes:
result = qa({"question": "what is section 2 about?", "chat_history": []})
```
Crashes with:
ValueError: Should always be something for OpenAI.
This works fine on ChatOpenAI but not AzureChatOpenAI. | Creating and using AzureChatOpenAI directly works fine, but crashing through ChatVectorDBChain with "ValueError: Should always be something for OpenAI." | https://api.github.com/repos/langchain-ai/langchain/issues/1822/comments | 2 | 2023-03-20T16:27:16Z | 2023-03-20T16:47:11Z | https://github.com/langchain-ai/langchain/issues/1822 | 1,632,466,855 | 1,822 |
[
"hwchase17",
"langchain"
]
| Hey team,
I have created an API in the format below:
```python
@app.post("/get_answer")
async def get_answer(user_query):
    return vector_db.qa(user_query)
```
And I call this function for 5 user queries, but it is only able to process the first query; the other queries hit issues from OpenAI. Is calling the OpenAI API in parallel not allowed at all?
| async call for question answering | https://api.github.com/repos/langchain-ai/langchain/issues/1816/comments | 1 | 2023-03-20T13:59:50Z | 2023-09-10T16:41:33Z | https://github.com/langchain-ai/langchain/issues/1816 | 1,632,175,081 | 1,816 |
[
"hwchase17",
"langchain"
]
| During the tracing of ChatVectorDBChain, `source_documents` is shown, but when Explore is clicked, `source_documents` suddenly becomes `[object Object],[object Object]`.
Here is an example:

and then after clicking on explore button:

| During the tracing of ChatVectorDBChain, even though it shows source_documents but when clicked on Explore, source_documents suddenly becomes [object Object],[object Object] | https://api.github.com/repos/langchain-ai/langchain/issues/1815/comments | 3 | 2023-03-20T10:58:37Z | 2023-09-10T16:41:40Z | https://github.com/langchain-ai/langchain/issues/1815 | 1,631,865,900 | 1,815 |
[
"hwchase17",
"langchain"
]
| OpenAI has been fairly unstable keeping up with the load after back-to-back releases, and it tends to fail on some requests.
Now, if we're embedding a big document in chunks, it tends to fail at some point, and `Pinecone.from_texts()` does not have exception handling, so when a single chunk fails, the rest of the document is wasted.
<img width="641" alt="image" src="https://user-images.githubusercontent.com/64721638/226257394-76ba84f7-2801-49e9-9261-b4a4550a4660.png">
- We should use exponential retry or better exception handling so that the whole document does not fail.
Please let me know if I can work on the issue and submit a PR | Need better exception Handling at Pinecone.from_texts() | https://api.github.com/repos/langchain-ai/langchain/issues/1811/comments | 1 | 2023-03-20T05:48:55Z | 2023-09-10T16:41:44Z | https://github.com/langchain-ai/langchain/issues/1811 | 1,631,430,634 | 1,811 |
[
"hwchase17",
"langchain"
]
| Error Message:

Usage:
I am currently using **AzureChatOpenAI** with the parameters below

But it is not taking the value from the parameter and throws an error. | AzureChatOpenAI failed to accept openai_api_base, throwing error | https://api.github.com/repos/langchain-ai/langchain/issues/1810/comments | 1 | 2023-03-20T05:44:48Z | 2023-09-10T16:41:49Z | https://github.com/langchain-ai/langchain/issues/1810 | 1,631,427,537 | 1,810 |
[
"hwchase17",
"langchain"
]
| While trying to rebuild chat_pdf based on mayo's example,
I noticed that the Pinecone vector store doesn't return similar docs when performing the similarity_search function.
I tested it against other vector providers like FAISS and Chroma. In both cases it works.
Here is the [code link](https://github.com/butzhang/simple_chat_pdf/blob/main/simple_chat_pdf/components/question_handler.py#L28-L30)
Steps to reproduce.
"""
from simple_chat_pdf.components.question_handler import QuestionHandler
question = 'what is this legal case about'
r = QuestionHandler().get_answer(question=question, chat_history=[])
"""
You can switch between the different vector providers and see that only Pinecone fails and doesn't find similar docs.
You need to switch to your own OpenAI API key because mine might expire.
| issue with pinecone similarity_search function | https://api.github.com/repos/langchain-ai/langchain/issues/1809/comments | 2 | 2023-03-20T05:13:20Z | 2023-10-27T16:09:24Z | https://github.com/langchain-ai/langchain/issues/1809 | 1,631,399,350 | 1,809 |
[
"hwchase17",
"langchain"
]
| Lots of customers are asking whether langchain has a document loader for Azure Blob Storage, like the ones for AWS S3 or GCS. As you know, Microsoft is a big partner of OpenAI, so there is a real need for a native document loader for Azure Blob Storage as well. We would be very happy to see this feature ASAP. | Document loader for Azure Blob storage | https://api.github.com/repos/langchain-ai/langchain/issues/1805/comments | 3 | 2023-03-20T02:39:16Z | 2023-03-27T15:17:18Z | https://github.com/langchain-ai/langchain/issues/1805 | 1,631,276,872 | 1,805 |
[
"hwchase17",
"langchain"
]
| `poetry install -E all` fails with Poetry >=1.4.0 due to upstream incompatibility between `poetry>=1.4.0` and `pydata_sphinx_theme`.
This is a tracking issue. I've already created an issue upstream here: https://github.com/pydata/pydata-sphinx-theme/issues/1253 | Poetry 1.4.0 installation fails | https://api.github.com/repos/langchain-ai/langchain/issues/1801/comments | 2 | 2023-03-19T23:42:55Z | 2023-09-12T21:30:13Z | https://github.com/langchain-ai/langchain/issues/1801 | 1,631,163,256 | 1,801 |
[
"hwchase17",
"langchain"
]
| Hi, does anyone know how to override the prompt template of ConversationChain? I am creating a custom prompt template that takes in an additional input variable
```
PROMPT_TEMPLATE = """ {my_info}
{history}
Human: {input}
AI:"""
PROMPT = PromptTemplate(
input_variables=["history", "input", "my_info"], template=PROMPT_TEMPLATE
)
conversation_chain = ConversationChain(
prompt=PROMPT,
llm=OpenAI(temperature=0.7),
verbose=True,
memory=ConversationBufferMemory()
)
```
but got the following error:
```
Got unexpected prompt input variables. The prompt expects ['history', 'input', 'my_info'], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error)
```
Is my understanding correct that ConversationChain currently only supports prompt templates that take "history" and "input" as the input variables? | Error when overriding default prompt template of ConversationChain | https://api.github.com/repos/langchain-ai/langchain/issues/1800/comments | 27 | 2023-03-19T23:33:20Z | 2024-02-08T06:43:47Z | https://github.com/langchain-ai/langchain/issues/1800 | 1,631,160,642 | 1,800 |
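One possible workaround, sketched under the assumption that `LLMChain` does not enforce `ConversationChain`'s stricter prompt validation: use an `LLMChain` with the same memory and pass the extra variable at call time (`PROMPT` is the three-variable template defined above).
```python
from langchain import LLMChain, OpenAI
from langchain.memory import ConversationBufferMemory

chain = LLMChain(
    llm=OpenAI(temperature=0.7),
    prompt=PROMPT,
    verbose=True,
    # input_key tells the memory which variable holds the human message.
    memory=ConversationBufferMemory(memory_key="history", input_key="input"),
)
chain.predict(input="What city do I live in?", my_info="I live in Berlin.")
```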
[
"hwchase17",
"langchain"
]
| `ChatVectorDBChain` with `ChatOpenAI` fails with `Should always be something for OpenAI.`
It appears `_combine_llm_outputs` added to `ChatOpenAI` in v0.0.116 is receiving a `llm_outputs` list containing a `None` value. I think this is related to streaming responses from the OpenAI API.
see https://github.com/hwchase17/langchain/pull/1785 | v0.0.116 ChatVectorDBChain with ChatOpenAI fails with `Should always be something for OpenAI.` | https://api.github.com/repos/langchain-ai/langchain/issues/1799/comments | 2 | 2023-03-19T23:18:24Z | 2023-03-20T14:51:20Z | https://github.com/langchain-ai/langchain/issues/1799 | 1,631,156,471 | 1,799 |
[
"hwchase17",
"langchain"
]
| @hwchase17 Thank you for leading work on this repo! It's very clear you've put a lot of love into this project. 🤗 ❤️
My coworker, @3coins, with whom I work daily, is a regular contributor here.
I wanted to offer you a design proposal that I think would be a great addition to LangChain. Here goes:
# Motivation
LLM providers lack a unified interface for common use-cases:
- **Specifying a model ID to a model provider**
- Users lack a unified interface for specifying the model they wish to use when instantiating a model provider. `openai` and `openai-chat` both expect the model ID in `model_name`, but `ai21` and `anthropic` expect `model`.
- **Discovering supported models**
- Users don't have any way of determining valid model IDs for a given provider. For example, a user trying to invoke ChatGPT via `OpenAI(model_name="chatgpt")` will be quite confused, and there's no clear way for a user to look up what the allowed values are.
- The only solution for this now is to look up upstream API reference documentation on what models are allowed, which is very tedious.
- **Determining prerequisites**
- Users don't have any way of determining what packages they need without looking at the source code or encountering a runtime `ImportError`, even if they have knowledge of what providers they wish to use. There's no attribute/property on any of the model providers that lists what packages must be installed before using them.
- **Determining necessary authentication**
- Similar to the use-case above, users don't have any way of knowing in advance what authentication they should supply and how to supply them before using a provider.
We have run into these pain points while [integrating LangChain with Jupyter AI](https://github.com/jupyterlab/jupyter-ai/pull/18). This proposal, if accepted, would allow applications to build on top of LangChain LLM providers much more easily.
# Proposal
I propose:
1. We formalize the notion of model IDs and provider IDs, and enforce this naming convention broadly throughout the codebase and documentation.
2. We implement a unified interface for provider interaction by expanding the definition of `BaseLanguageModel`:
```python
class EnvAuthStrategy(BaseModel):
type: Literal["env"] = 'env'
name: str
# for sagemaker_endpoint expecting creds at ~/.aws/credentials
class FileAuthStrategy(BaseModel):
type: Literal["file"] = 'file'
path: str
AuthStrategy = Union[EnvAuthStrategy, FileAuthStrategy]
class BaseLanguageModel(BaseModel, ABC):
# provider ID. this is currently bound to _llm_type()
id: str
# ID of the model to invoke by this provider
model_id: str
# List of supported models
# For registry providers, this will just be ["*"]
# See rest of the issue for explanation
models: List[str]
# List of package requirements
package_reqs: List[str]
# Authn/authz strategy
auth_strategy: AuthStrategy
...
```
Subclasses should override these fields and [make all of them constant](https://docs.pydantic.dev/usage/schema/#field-customization) with the exception of `model_id`. For example, the `cohere` provider might look something like this:
```python
class CohereAuthStrategy(EnvAuthStrategy):
name = "COHERE_API_KEY"
class Cohere(LLM, BaseModel):
id = Field("cohere", const=True)
# "medium" is just the default, still changeable at runtime
model_id = "medium"
# Cohere model provider supports any model available via
# `cohere.Client#generate()`.`
# Reference: https://docs.cohere.ai/reference/generate
models: Field(["medium", "xlarge"], const=True)
package_reqs: Field(["cohere"], const=True)
auth_strategy: Field(CohereAuthStrategy, const=True)
```
This strategy also handles registry providers (see "Terminology" section below) nicely. **A model ID can be defined as any identifier for an instance of a model**. This is best illustrated with a few examples:
- `huggingface_hub`: the HF repo ID, e.g. `gpt2` or `google/flan-t5-xxl`
- `huggingface_endpoint`: the URL to the HF endpoint, e.g. `https://foo.us-west-2.aws.endpoints.huggingface.cloud/bar`
- `sagemaker_endpoint`: the endpoint name. Your region and authentication credentials should already be specified via boto3.
While the syntax of model IDs is indeed very different for registry providers, if you think about it, it still functions exactly as a model ID: it identifies one and only one model hosted elsewhere.
Because registry providers have a dynamic, unknown, and very large set of valid model IDs, they declare their supported models using a wildcard like so:
```
models: Field(["*"], const=True)
```
# Next steps
Let's first get consensus on this proposal and assign the work later. I'd be happy to kick off the PR and define the types, but I'll likely need to loop in other contributors to assist due to the scope of this proposal. There will also need to be some work done on the documentation as well.
# Appendix
## Terminology
- **Model provider / LLM provider**: A class that can provide one or more models. In LangChain, these currently inherit `BaseLanguageModel`.
- **Registry provider**: A special subset of model providers that have a dynamic number of provided models.
- These include but are not limited to: `huggingface_hub`, `huggingface_endpoint`, `sagemaker_endpoint`
- These providers are unique in the sense that they do not have a static number of models they support. New models are being uploaded to HuggingFace Hub daily.
- We call these providers "registry providers" (since they mimic the behavior of a package registry, like NPM/PyPi)
- **Model**: A model provided by a model provider (forgive the circular working definition, models are quite difficult to define strictly). For example, `text-davinci-003` is a model provided by the `openai` model provider.
- **Model provider ID**: a string identifier for a model provider. Currently this is retrieved from the `_llm_type` property.
- **Model ID**: a string identifier for a model.
| [Design Proposal] Standardized interface for LLM provider usage | https://api.github.com/repos/langchain-ai/langchain/issues/1797/comments | 3 | 2023-03-19T22:28:09Z | 2023-09-10T16:41:54Z | https://github.com/langchain-ai/langchain/issues/1797 | 1,631,141,587 | 1,797 |
[
"hwchase17",
"langchain"
]
| The current design leaves only two options for initialization `openai-python`:
1. `OPENAI_API_KEY` environment variable
2. pass in `openai_api_key`
LangChain works just fine with the other `openai-python` settings like `openai.api_type`.
Can we make it so it uses `openai.api_key` if it is not `None`?
Happy to try a PR but wanted to understand design choice first.
https://github.com/hwchase17/langchain/blob/master/langchain/llms/openai.py#LL202C9-L202C29 | OpenAI base model initialization ignores currently set openi.api_key | https://api.github.com/repos/langchain-ai/langchain/issues/1796/comments | 1 | 2023-03-19T21:45:28Z | 2023-08-24T16:14:16Z | https://github.com/langchain-ai/langchain/issues/1796 | 1,631,129,215 | 1,796 |
[
"hwchase17",
"langchain"
]
| I'm using the pipeline for Q&A pipeline on non-english language:
```
pinecone.init(
api_key=PINECONE_API_KEY, # find at app.pinecone.io
environment=PINECONE_API_ENV # next to api key in console
)
index_name = "langchain2"
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
docsearch = Pinecone.from_existing_index(index_name=index_name, embedding=embeddings)
llm = OpenAI(temperature=0, openai_api_key=OPENAI_API_KEY)
chain = load_qa_chain(llm, chain_type="stuff")
query = "QUESTION"
docs = docsearch.similarity_search(query, include_metadata=True)
res = chain.run(input_documents=docs, question=query)
print(res)
```
It gets stuck on the `res = chain.run(input_documents=docs, question=query)` line. I have been waiting for ~20 minutes already. What is the reason for that, and how do I investigate?
---------------
**UPD**
After ~30 mins I got
```
Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600).
```
What's the reason for that and how can I fix it? | chain.run threw 403 error | https://api.github.com/repos/langchain-ai/langchain/issues/1795/comments | 6 | 2023-03-19T21:35:50Z | 2023-03-22T20:39:40Z | https://github.com/langchain-ai/langchain/issues/1795 | 1,631,126,760 | 1,795 |
[
"hwchase17",
"langchain"
]
| Within the Chroma DB, similarity_search has a default "k" of 4.
However, if there are fewer than 4 results to return for the query, it will crash instead of returning whatever is available.
```similarity_search(query: str, k: int = 4, filter: Optional[Dict[str, str]] = None, **kwargs: Any) → List[langchain.docstore.document.Document][[source]](https://langchain.readthedocs.io/en/latest/_modules/langchain/vectorstores/chroma.html#Chroma.similarity_search)
Run similarity search with Chroma.
Parameters
query (str) – Query text to search for.
k (int) – Number of results to return. Defaults to 4.
filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
Returns
List of documents most simmilar to the query text.
Return type
List[Document]
```
```
raise NotEnoughElementsException(
chromadb.errors.NotEnoughElementsException: Number of requested results 4 cannot be greater than number of elements in index 1
```
I believe a check is needed to return whatever is available whenever the results are < k.
Edit: You can set `k=1` for the search, but then you are artificially limiting yourself, without knowing what is available.
It really should check whether the N available results are <= k, and if so, instead of crashing it should return whatever is available. | [BUG] Chroma DB - similarity_search - chromadb.errors.NotEnoughElementsException | https://api.github.com/repos/langchain-ai/langchain/issues/1793/comments | 16 | 2023-03-19T21:11:37Z | 2023-09-22T16:05:46Z | https://github.com/langchain-ai/langchain/issues/1793 | 1,631,119,009 | 1,793 |
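Until such a check exists, one possible user-side workaround is to clamp `k` to the collection size before searching. This is only a sketch: `_collection` is a private attribute of the langchain `Chroma` wrapper, and it assumes the underlying chromadb collection exposes `count()`.
```python
# vectorstore is a langchain Chroma instance; ask for at most as many
# results as the index actually contains.
k = min(4, vectorstore._collection.count())
docs = vectorstore.similarity_search(query, k=k)
```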
[
"hwchase17",
"langchain"
]
| How do I query from an existing index?
I filled up an index in Pinecone using:
```
docsearch = Pinecone.from_texts([t.page_content for t in texts], embeddings, index_name=index_name)
```
Now, I'm creating a separate `.py` file, where I need to use the existing index to query. As I understand I need to use the following function:
```
@classmethod
def from_existing_index(
cls,
index_name: str,
embedding: Embeddings,
text_key: str = "text",
namespace: Optional[str] = None,
) -> Pinecone:
"""Load pinecone vectorstore from index name."""
try:
import pinecone
except ImportError:
raise ValueError(
"Could not import pinecone python package. "
"Please install it with `pip install pinecone-client`."
)
return cls(
pinecone.Index(index_name), embedding.embed_query, text_key, namespace
)
```
but what are the arguments there? I know only my `index_name`. What are the remaining arguments? The embeddings are the embeddings from OpenAI, right?
e.g.:
```
embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
```
What is `text_key`? And what is `namespace`? | Query on existing index | https://api.github.com/repos/langchain-ai/langchain/issues/1792/comments | 30 | 2023-03-19T20:55:14Z | 2024-07-06T10:06:53Z | https://github.com/langchain-ai/langchain/issues/1792 | 1,631,113,414 | 1,792 |
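For reference, a hedged sketch of how the arguments might be filled in for the index built above: `embedding` is the same embeddings object used at indexing time, `text_key` is the metadata field where the chunk text was stored ("text" by default), and `namespace` can be left out if none was used when indexing.
```python
import pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key=PINECONE_API_KEY, environment=PINECONE_API_ENV)

embeddings = OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY)
docsearch = Pinecone.from_existing_index(
    index_name=index_name,   # the same index name used with from_texts
    embedding=embeddings,
    text_key="text",         # default field that holds the chunk text
)
docs = docsearch.similarity_search("my query", k=4)
```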
[
"hwchase17",
"langchain"
]
| # [BUG] Open-Meteo prompt token count error
The Open-Meteo API returns a JSON response that is large, and the GPT-3 API cannot digest it to return information.
## What and Why?
While running the Open-Meteo chain, the Open-Meteo API returns a large JSON file that is then sent to the GPT-3 API to understand; it is larger than the token limit and thus returns errors.
## Suggestions?
Add another step to parse the JSON using a JSON chain | [BUG] Open-Meteo prompt token count error | https://api.github.com/repos/langchain-ai/langchain/issues/1790/comments | 1 | 2023-03-19T18:40:49Z | 2023-08-24T16:14:21Z | https://github.com/langchain-ai/langchain/issues/1790 | 1,631,067,675 | 1,790 |
[
"hwchase17",
"langchain"
]
| Is there any chance of adding support for self-hosted models like ChatGLM or Transformers models?
I tried to use Runhouse with the model “ChatGLM-6B”, but it is not working. | Add support for self-hosted models like ChatGLM or transformer models | https://api.github.com/repos/langchain-ai/langchain/issues/1780/comments | 7 | 2023-03-19T16:33:47Z | 2023-09-19T23:30:45Z | https://github.com/langchain-ai/langchain/issues/1780 | 1,631,012,518 | 1,780 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/3701b2901e76f2f97239c2152a6a7d01754fb666/langchain/chains/question_answering/map_rerank_prompt.py#L6
This regex does not handle the case where the answer text contains something like `\n Helpful Score: 100`, and the parsing fails even though the score is 100.
It should be something like `(.*?)\n((.?)*)Score: (.*)` to resolve this use case. | Fix regex for map_rerank_prompt.py | https://api.github.com/repos/langchain-ai/langchain/issues/1779/comments | 2 | 2023-03-19T15:26:29Z | 2023-09-10T16:41:59Z | https://github.com/langchain-ai/langchain/issues/1779 | 1,630,987,862 | 1,779 |
[
"hwchase17",
"langchain"
]
| It would be great to see LangChain integrate with Stanford's Alpaca 7B model, a fine-tuned LLaMA (see https://github.com/hwchase17/langchain/issues/1473).
Stanford created an AI able to generate outputs that were largely on par with OpenAI’s `text-davinci-003` and regularly better than GPT-3, all for a fraction of the computing power and price.
Stanford's Alpaca is a language model that was fine-tuned from Meta's LLaMA with 52,000 instructions generated by GPT-3.5[[1]](https://the-decoder.com/stanfords-alpaca-shows-that-openai-may-have-a-problem/). The researchers used AI-generated instructions to train Alpaca 7B, which exhibits many GPT-3.5-like behaviors[[1]](https://the-decoder.com/stanfords-alpaca-shows-that-openai-may-have-a-problem/). In a blind test using input from the Self-Instruct Evaluation Set, both models performed comparably[[1]](https://the-decoder.com/stanfords-alpaca-shows-that-openai-may-have-a-problem/)[[2]](https://github.com/tatsu-lab/stanford_alpaca). However, Alpaca has problems common to other language models such as hallucinations, toxicity, and stereotyping[[1]](https://the-decoder.com/stanfords-alpaca-shows-that-openai-may-have-a-problem/). The team is releasing an interactive demo, the training dataset, and the training code for research purposes[[1]](https://the-decoder.com/stanfords-alpaca-shows-that-openai-may-have-a-problem/)[[4]](https://crfm.stanford.edu/2023/03/13/alpaca.html).
Alpaca is still under development and has limitations that need to be addressed. The researchers have not yet fine-tuned the model to be safe and harmless[[2]](https://github.com/tatsu-lab/stanford_alpaca). They encourage users to be cautious when interacting with Alpaca and report any concerning behavior to help improve it[[2]](https://github.com/tatsu-lab/stanford_alpaca).
LLaMA is a new open-source language model from Meta Research that performs as well as closed-source models. Stanford's Alpaca is a fine-tuned version of LLaMA that can respond to instructions like ChatGPT[[3]](https://replicate.com/blog/replicate-alpaca). It functions more like a fancy version of autocomplete than a conversational bot[[3]](https://replicate.com/blog/replicate-alpaca).
The researchers are releasing their findings about an instruction-following language model dubbed Alpaca. They trained the Alpaca model on 52K instruction-following demonstrations generated in the style of self-instruct using `text-davinci-003`. On the self-instruct evaluation set, Alpaca shows many behaviors similar to OpenAI’s `text-davinci-003` but is also surprisingly small and easy/cheap to reproduce[[4]](https://crfm.stanford.edu/2023/03/13/alpaca.html). | Alpaca (Fine-tuned LLaMA) | https://api.github.com/repos/langchain-ai/langchain/issues/1777/comments | 23 | 2023-03-19T15:16:09Z | 2023-09-22T16:50:52Z | https://github.com/langchain-ai/langchain/issues/1777 | 1,630,984,304 | 1,777 |
[
"hwchase17",
"langchain"
]
| This issue may sound silly, I apologize for not being able to find the answer (and for my poor English).
For example, I want my chatbot to use only paths from a list I provide. I tried to include the following content in the suffix:
```py
"""
omit...
You should only use the following file paths:
{path_list}
omit...
"""
```
But how can I add `path_list` to `input_variables`? I noticed that the `create_prompt` method has the following statement:
```py
if input_variables is None:
input_variables = ["input", "chat_history", "agent_scratchpad"]
```
So I tried adding `input_variables` in `agent_kwargs`:
```py
agent = initialize_agent(
tools,
llm,
agent="conversational-react-description",
verbose=True,
memory=memory,
return_intermediate_steps=True,
agent_kwargs={
'prefix': CHAT_BOT_PREFIX,
'format_instructions': CHAT_BOT_FORMAT_INSTRUCTIONS,
'suffix': CHAT_BOT_SUFFIX,
'input_variables': ["input", "chat_history", "agent_scratchpad", "path_list"], # here
},
)
```
This does create a prompt in the agent with a `path_list` variable in it. But how can I pass the value of path_list?
The way I thought was to pass it togethor with `input`.
```py
result = agent({"input": str_input, "path_list": path_list_str})
```
But this doesn't work. It raises the following error:
```text
ValueError: One input key expected got ['path_list', 'input']
```
I don't know if there's another way or if I'm missing something. | How to use self defined variable in an Agent? | https://api.github.com/repos/langchain-ai/langchain/issues/1774/comments | 5 | 2023-03-19T11:03:28Z | 2023-03-24T23:38:32Z | https://github.com/langchain-ai/langchain/issues/1774 | 1,630,889,769 | 1,774 |
[
"hwchase17",
"langchain"
]
| When using the chat application, I encountered an error message stating "openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens" when I asked a question like "Did he mention Stephen Breyer?".

| openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens | https://api.github.com/repos/langchain-ai/langchain/issues/1767/comments | 11 | 2023-03-19T01:53:30Z | 2023-11-01T16:08:00Z | https://github.com/langchain-ai/langchain/issues/1767 | 1,630,737,686 | 1,767 |
[
"hwchase17",
"langchain"
]
| It seems that #1578 adds support for SQLAlchemy v2 but the [poetry lock file](https://github.com/hwchase17/langchain/blob/8685d53adcdd0310e76349ecb4e2b87f980c4673/poetry.lock#L6211) is still at 1.4.46. | Update poetry lock to allow SQLAlchemy v2 | https://api.github.com/repos/langchain-ai/langchain/issues/1766/comments | 8 | 2023-03-19T01:48:23Z | 2023-04-25T04:10:57Z | https://github.com/langchain-ai/langchain/issues/1766 | 1,630,736,276 | 1,766 |
[
"hwchase17",
"langchain"
]
| If I use langchain for question answering on a single document with any LLM, is there a way to also extract the corresponding character offsets (begin and end) of the response in the original document?
Thanks,
Ravi. | Character offsets for the response/Answer | https://api.github.com/repos/langchain-ai/langchain/issues/1763/comments | 1 | 2023-03-18T21:44:36Z | 2023-08-24T16:14:26Z | https://github.com/langchain-ai/langchain/issues/1763 | 1,630,620,637 | 1,763 |
[
"hwchase17",
"langchain"
]
| ## Summary
Occurring in [Chat Langchain Demo](https://github.com/hwchase17/chat-langchain) when upgrading from 0.0.105 -> 0.0.106 getting
`ERROR:root:'OpenAIEmbeddings' object has no attribute 'max_retries'`
This occurs AFTER the user sends their first message; the error instantly comes back in the logs. The error is thrown from this try/except block. https://github.com/hwchase17/chat-langchain/blob/ba456378e04125ccbdc7715f5be17114df2ee2e1/main.py#L68
My suspicion is that it originates within the `get_chain` method, as the ChatResponse's are really just validation and don't touch Embeddings. Seems like may be deeper.
## Expected Behavior (v0.0.105):

## Actual Behavior (v0.0.106):

## Frequency:
Whenever creating a USER message in chat-langchain when upgrading past v0.0.105
## Environment:
[List the operating system, browser, device, and any other relevant software or hardware information]
https://github.com/hwchase17/chat-langchain
## Demo
https://user-images.githubusercontent.com/40816745/226128476-9d6dfae8-2d4f-438d-b9f7-c7daf39c8646.mp4 | ERROR:root:'OpenAIEmbeddings' object has no attribute 'max_retries' - VERSION 0.0.106 and up | https://api.github.com/repos/langchain-ai/langchain/issues/1759/comments | 2 | 2023-03-18T18:17:46Z | 2023-09-18T16:23:15Z | https://github.com/langchain-ai/langchain/issues/1759 | 1,630,492,439 | 1,759 |
[
"hwchase17",
"langchain"
]
| # Quick summary
Using the `namespace` argument in the function `Pinecone.from_existing_index` has no effect. Indeed, it is passed to `pinecone.Index`, which has no `namespace` argument.
# Steps to reproduce a relevant bug
```
import pinecone
from langchain.docstore.document import Document
from langchain.vectorstores.pinecone import Pinecone
from tests.integration_tests.vectorstores.fake_embeddings import FakeEmbeddings
index = pinecone.Index("langchain-demo") # this should be a new index
texts = ["foo", "bar", "baz"]
metadatas = [{"page": i} for i in range(len(texts))]
Pinecone.from_texts(
texts,
FakeEmbeddings(),
index_name="langchain-demo",
metadatas=metadatas,
namespace="test-namespace",
)
texts = ["foo2", "bar2", "baz2"]
metadatas = [{"page": i} for i in range(len(texts))]
Pinecone.from_texts(
texts,
FakeEmbeddings(),
index_name="langchain-demo",
metadatas=metadatas,
namespace="test-namespace2",
)
# Search with namespace
docsearch = Pinecone.from_existing_index("langchain-demo",
embedding=FakeEmbeddings(),
namespace="test-namespace")
output = docsearch.similarity_search("foo", k=6)
# check that we don't get results from the other namespace
page_contents = [o.page_content for o in output]
assert set(page_contents) == set(["foo", "bar", "baz"])
```
# Fix
The `namespace` argument used in `Pinecone.from_existing_index` and `Pinecone.from_texts` should be stored as an attribute and used by default by every method. | namespace argument not taken into account when creating Pinecone index | https://api.github.com/repos/langchain-ai/langchain/issues/1756/comments | 0 | 2023-03-18T12:26:39Z | 2023-03-19T02:55:40Z | https://github.com/langchain-ai/langchain/issues/1756 | 1,630,304,145 | 1,756 |
[
"hwchase17",
"langchain"
]
| Currently, the document on [Creating a custom prompt template](https://langchain.readthedocs.io/en/latest/modules/prompts/examples/custom_prompt_template.html) is outdated, as the code in the guide is no longer functional for the following reasons:
1. The document outlines creating a custom prompt template by inheriting from `BasePromptTemplate` with only the `format` method. However, a new required method `format_prompt` has been introduced as an interface for adapting chat-style prompt template usage.
2. Maybe, the `StringPromptTemplate` was created to absorb the change, but it is currently not exposed.
Therefore, I suggest using `StringPromptTemplate` instead of `BasePromptTemplate`, and exposing it in `langchain.prompts`.
I have created a [PR#1753](https://github.com/hwchase17/langchain/pull/1753) for this, and would appreciate it if you could review it.
Additionally, I have created another PR to slightly modify the class docstring for both `BasePromptTemplate` and `StringPromptTemplate`, as their current docstrings are outdated and require updating in relation to the issue at hand. | Document for a custom prompt template is outdated. | https://api.github.com/repos/langchain-ai/langchain/issues/1754/comments | 0 | 2023-03-18T10:40:32Z | 2023-03-19T23:51:51Z | https://github.com/langchain-ai/langchain/issues/1754 | 1,630,270,933 | 1,754 |
[
"hwchase17",
"langchain"
]
| I tried the code from the documentation here in a Jupyter notebook in VS Code: https://langchain.readthedocs.io/en/latest/modules/llms/async_llm.html?highlight=agenerate#async-api-for-llm
It worked, but when I replaced `from langchain.llms import OpenAI` with `from langchain.chat_models import ChatOpenAI`, and `OpenAI(temperature=0.9)` with `ChatOpenAI(temperature=0.9)`, I received the error "Got unknown type H" shown in the first error message below. The H is the first letter of the prompt, and it changes when the prompt changes, so I suppose the string is being split somewhere.
I then tried to run it as a script, changing the `await` outside of a function to `asyncio.run(...)`, but I got the second error below.
--------------
# Error with a Jupyter VScode notebook using `await`
## Code used
```
import time
import asyncio
def generate_serially():
llm = ChatOpenAI(temperature=0)
for _ in range(10):
resp = llm.generate(["Hello, how are you?"])
print(resp.generations[0][0].text)
async def async_generate(llm):
resp = await llm.agenerate(["Hello, how are you?"])
print(resp.generations[0][0].text)
async def generate_concurrently():
llm = ChatOpenAI(temperature=0)
tasks = [async_generate(llm) for _ in range(10)]
await asyncio.gather(*tasks)
s = time.perf_counter()
# # If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s
print("\n", f"Concurrent executed in {elapsed:0.2f} seconds.")
s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print("\n",f"Serial executed in {elapsed:0.2f} seconds.")
```
## Error obtained
```
ValueError Traceback (most recent call last)
/home/username/projects/Latex_Doc_Search.ipynb Cell 35 in ()
----> 1 await generate_concurrently()
/home/username/projects/Latex_Doc_Search.ipynb Cell 35 in generate_concurrently()
17 llm = ChatOpenAI(temperature=0)
18 tasks = [async_generate(llm) for _ in range(10)]
---> 19 await asyncio.gather(*tasks)
/home/username/projects/Latex_Doc_Search.ipynb Cell 35 in async_generate(llm)
11 async def async_generate(llm):
---> 12 resp = await llm.agenerate(["Hello, how are you?"])
13 print(resp.generations[0][0].text)
File ~/anaconda3/lib/python3.9/site-packages/langchain/chat_models/base.py:57, in BaseChatModel.agenerate(self, messages, stop)
53 async def agenerate(
54 self, messages: List[List[BaseMessage]], stop: Optional[List[str]] = None
55 ) -> LLMResult:
56 """Top Level call"""
---> 57 results = [await self._agenerate(m, stop=stop) for m in messages]
58 return LLMResult(generations=[res.generations for res in results])
File ~/anaconda3/lib/python3.9/site-packages/langchain/chat_models/base.py:57, in (.0)
53 async def agenerate(
...
---> 88 raise ValueError(f"Got unknown type {message}")
89 if "name" in message.additional_kwargs:
90 message_dict["name"] = message.additional_kwargs["name"]
ValueError: Got unknown type H
```
------------------
--------------------
# Error with a script using asyncio.run(
```
ValueError Traceback (most recent call last)
[/home/username/projects/Latex_Doc_Search.ipynb](https://file+.vscode-resource.vscode-cdn.net/home/username/projects/Latex_Doc_Search.ipynb) Cell 39 in ()
[1](vscode-notebook-cell:/home/username/projects/Latex_Doc_Search.ipynb#Y150sZmlsZQ%3D%3D?line=0) import nest_asyncio
[2](vscode-notebook-cell:/home//projects/Latex_Doc_Search.ipynb#Y150sZmlsZQ%3D%3D?line=1) nest_asyncio.apply()
----> [4](vscode-notebook-cell:/home/username/projects/Latex_Doc_Search.ipynb#Y150sZmlsZQ%3D%3D?line=3) await generate_concurrently()
[/home/username/projects/Latex_Doc_Search.ipynb](https://file+.vscode-resource.vscode-cdn.net/home/username/projects/Latex_Doc_Search.ipynb) Cell 39 in generate_concurrently()
[17](vscode-notebook-cell:/home/username/projects/Latex_Doc_Search.ipynb#Y150sZmlsZQ%3D%3D?line=16) llm = ChatOpenAI(temperature=0)
[18](vscode-notebook-cell:/home/username/projects/Latex_Doc_Search.ipynb#Y150sZmlsZQ%3D%3D?line=17) tasks = [async_generate(llm) for _ in range(10)]
---> [19](vscode-notebook-cell:/home/username/projects/Latex_Doc_Search.ipynb#Y150sZmlsZQ%3D%3D?line=18) await asyncio.gather(*tasks)
File [~/anaconda3/lib/python3.9/asyncio/tasks.py:328](https://file+.vscode-resource.vscode-cdn.net/home/username/projects/~/anaconda3/lib/python3.9/asyncio/tasks.py:328), in Task.__wakeup(self, future)
326 def __wakeup(self, future):
327 try:
--> 328 future.result()
329 except BaseException as exc:
330 # This may also be a cancellation.
331 self.__step(exc)
File [~/anaconda3/lib/python3.9/asyncio/tasks.py:256](https://file+.vscode-resource.vscode-cdn.net/home/username/projects/~/anaconda3/lib/python3.9/asyncio/tasks.py:256), in Task.__step(***failed resolving arguments***)
252 try:
253 if exc is None:
254 # We use the `send` method directly, because coroutines
255 # don't have `__iter__` and `__next__` methods.
...
---> 88 raise ValueError(f"Got unknown type {message}")
89 if "name" in message.additional_kwargs:
90 message_dict["name"] = message.additional_kwargs["name"]
ValueError: Got unknown type H
``` | error when using an asynchronous await with ChatGPT | https://api.github.com/repos/langchain-ai/langchain/issues/1751/comments | 6 | 2023-03-18T08:13:50Z | 2023-11-27T21:59:57Z | https://github.com/langchain-ai/langchain/issues/1751 | 1,630,226,364 | 1,751 |
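A plausible cause, sketched below rather than confirmed: unlike the completion-style `OpenAI` class, the chat model's `agenerate` (per the signature visible in the traceback above) takes a list of message lists, so passing a plain string makes it iterate over characters, which would explain "Got unknown type H". The snippet assumes `HumanMessage` is available from `langchain.schema` at this version.
```python
import asyncio
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

async def main() -> None:
    llm = ChatOpenAI(temperature=0)
    # Each prompt is a list of messages; agenerate takes a list of such lists.
    resp = await llm.agenerate([[HumanMessage(content="Hello, how are you?")]])
    print(resp.generations[0][0].text)

asyncio.run(main())
```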
[
"hwchase17",
"langchain"
]
| AzureOpenAI seems not to work for gpt-3.5-turbo due to this issue | AzureOpenAI gpt-3.5-turbo doesn't support best_of parameter | https://api.github.com/repos/langchain-ai/langchain/issues/1747/comments | 6 | 2023-03-18T00:45:00Z | 2023-09-27T16:12:35Z | https://github.com/langchain-ai/langchain/issues/1747 | 1,630,060,590 | 1,747 |
[
"hwchase17",
"langchain"
]
| I'm trying to follow this example: https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/chat_vector_db.html?highlight=chatvectordb#chat-vector-db-with-streaming-to-stdout
and I've used PagedPDFSplitter to load a PDF.
This is how I've done it (build_vectorstore returns Chroma.from_documents(texts,embeddings))
```
documents = load_documents()
texts= split_text(documents)
vectorstore = build_vectorstore(texts)
llm = ChatOpenAI(temperature=0)
streaming_llm = ChatOpenAI(streaming=True,callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),verbose=True,temperature=0)
question_generator = LLMChain(llm=llm,prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_chain(streaming_llm,chain_type="stuff",prompt=QA_PROMPT)
qa = ChatVectorDBChain(vectorstore=vectorstore,combine_docs_chain=doc_chain,question_generator=question_generator)
chat_history = []
query = "What does this document contain?"
result = qa({"question": query, "chat_history": chat_history})
```
And this is the error I'm getting:
KeyError: {'context', 'question'}
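This KeyError comes from the prompt formatter: it is raised when the chain passes in variables that the template does not declare. A likely cause (an assumption, since the prompt definitions aren't shown above) is that `QA_PROMPT` does not include both `{context}` and `{question}`. A minimal sketch of a compatible prompt; the full traceback follows below:
```python
from langchain.prompts import PromptTemplate

# Sketch of a QA prompt declaring both variables the "stuff" chain fills in.
QA_PROMPT = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "Use the following context to answer the question.\n\n"
        "{context}\n\n"
        "Question: {question}\n"
        "Answer:"
    ),
)
```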
```
[91] result = qa({"question": query, "chat_history": chat_history})
[114](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=113) except (KeyboardInterrupt, Exception) as e:
[115](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=114) self.callback_manager.on_chain_error(e, verbose=self.verbose)
--> [116](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=115) raise e
[117](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=116) self.callback_manager.on_chain_end(outputs, verbose=self.verbose)
[118](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=117) return self.prep_outputs(inputs, outputs, return_only_outputs)
[107](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=106) self.callback_manager.on_chain_start(
[108](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=107) {"name": self.__class__.__name__},
[109](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=108) inputs,
[110](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=109) verbose=self.verbose,
[111](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=110) )
[112](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=111) try:
--> [113](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=112) outputs = self._call(inputs)
[114](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=113) except (KeyboardInterrupt, Exception) as e:
[115](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/chains/base.py?line=114) self.callback_manager.on_chain_error(e, verbose=self.verbose)
...
[16](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/formatting.py?line=15) extra = set(kwargs).difference(used_args)
[17](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/formatting.py?line=16) if extra:
---> [18](file:///c%3A/Users//miniconda3/envs/langchain2/lib/site-packages/langchain/formatting.py?line=17) raise KeyError(extra)
KeyError: {'context', 'question'}
``` | Getting KeyError: {'context', 'question'} when following Chat Vector DB with streaming example | https://api.github.com/repos/langchain-ai/langchain/issues/1745/comments | 1 | 2023-03-17T22:09:27Z | 2023-09-10T16:42:10Z | https://github.com/langchain-ai/langchain/issues/1745 | 1,629,974,694 | 1,745 |
[
"hwchase17",
"langchain"
]
| I have found that the OpenAI embeddings are decent, but they suffer when you want specific names or keywords to be included. I've found that the [sparse-dense approach](https://www.pinecone.io/learn/sparse-dense) produces better results, but it is not supported by the current implementation of the vector stores or the chains.
Has anyone else implemented a workaround or is there any planned support for this? | Support for Pinecone Hybrid Search (Sparse-dense embeddings) | https://api.github.com/repos/langchain-ai/langchain/issues/1743/comments | 7 | 2023-03-17T20:02:14Z | 2023-09-25T16:16:15Z | https://github.com/langchain-ai/langchain/issues/1743 | 1,629,872,044 | 1,743 |
[
"hwchase17",
"langchain"
]
| It's really useful to move the prompts etc. out of the main codebase. Currently, from the documentation and my own testing, it seems that only chains backed by the OpenAI LLM (only `text-davinci-003`) can be serialized. But no such support is available for Azure-based OpenAI LLMs (`text-davinci-003` and `gpt-3.5-turbo`).
Is my understanding correct? If yes, any plans on adding it soon?
| Chain Serialization Support with Azure OpenAI LLMs | https://api.github.com/repos/langchain-ai/langchain/issues/1736/comments | 2 | 2023-03-17T12:50:42Z | 2023-10-12T16:11:09Z | https://github.com/langchain-ai/langchain/issues/1736 | 1,629,261,920 | 1,736 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/vectorstores/milvus.py#L319
I'm using Milvus; for my question the similarity search returns 0 documents, so an index-out-of-range error occurs.
Error line:
https://github.com/hwchase17/langchain/blob/276940fd9babf8aec570dd869cc84fbca1c766bf/langchain/chains/llm.py#L95 | list index out of range error if similarity search gives 0 docs | https://api.github.com/repos/langchain-ai/langchain/issues/1733/comments | 7 | 2023-03-17T11:14:48Z | 2023-08-11T05:50:41Z | https://github.com/langchain-ai/langchain/issues/1733 | 1,629,133,888 | 1,733 |
[
"hwchase17",
"langchain"
]
| As most scientific papers are released not only as PDF but also as source code (LaTeX), I propose adding a LaTeX Text Splitter.
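A rough sketch of one possible approach (illustrative only; it simply reuses the existing `RecursiveCharacterTextSplitter` with LaTeX-aware separators, and a dedicated class could wrap this):
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Separators ordered from coarse to fine LaTeX structure (this list is illustrative).
LATEX_SEPARATORS = [
    "\n\\chapter{",
    "\n\\section{",
    "\n\\subsection{",
    "\n\\subsubsection{",
    "\n\n",
    "\n",
    " ",
    "",
]

latex_splitter = RecursiveCharacterTextSplitter(
    separators=LATEX_SEPARATORS, chunk_size=1000, chunk_overlap=100
)
chunks = latex_splitter.split_text(latex_source)  # latex_source: contents of a .tex file
```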
I shall split in hierarchical order (e.g. by sections first, then subsections, headings, ...) | LaTeX Text Splitter | https://api.github.com/repos/langchain-ai/langchain/issues/1731/comments | 1 | 2023-03-17T09:51:15Z | 2023-03-18T02:39:19Z | https://github.com/langchain-ai/langchain/issues/1731 | 1,629,015,269 | 1,731 |
[
"hwchase17",
"langchain"
]
| Please allow ChatGPT Turbo's ChatOpenAI object to be passed as the LLM in the summarization chain.
_(from langchain.chat_models.openai import ChatOpenAI)_
for the following usage
`summary_chain = load_summarize_chain(llm, chain_type="map_reduce")`
`summarize_document_chain = AnalyzeDocumentChain(combine_docs_chain=summary_chain)` | Include ChatGPT Turbo model in the Chain (Summarization) | https://api.github.com/repos/langchain-ai/langchain/issues/1730/comments | 5 | 2023-03-17T09:21:12Z | 2023-09-28T16:11:08Z | https://github.com/langchain-ai/langchain/issues/1730 | 1,628,973,092 | 1,730 |
[
"hwchase17",
"langchain"
]
| Is there any way I can use my existing vector store that I already have on Pinecone? I don't want to keep regenerating the embeddings because my documents contain a lot of text.
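For reusing an existing index, re-embedding isn't needed; a rough sketch (the index name and the `text` metadata key are assumptions about the existing setup):
```python
import pinecone
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

pinecone.init(api_key="...", environment="...")
index = pinecone.Index("my-existing-index")  # hypothetical index name

embeddings = OpenAIEmbeddings()
# Wrap the already-populated index; only queries are embedded from now on.
vectorstore = Pinecone(index, embeddings.embed_query, text_key="text")
docs = vectorstore.similarity_search("my question", k=4)
```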
I also want to link pinecone and the serpAPI, so that if the answer is not available in pinecone, I can browse the web for answers.
Is this something doable? Any resources would be really helpful. | Unable to add vector store and SerpAPI to agent | https://api.github.com/repos/langchain-ai/langchain/issues/1729/comments | 1 | 2023-03-17T07:42:56Z | 2023-08-24T16:14:35Z | https://github.com/langchain-ai/langchain/issues/1729 | 1,628,849,552 | 1,729 |
[
"hwchase17",
"langchain"
]
| ```
import os, pdb
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter, TokenTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI, OpenAIChat
os.environ["TOKENIZERS_PARALLELISM"] = "false"
f_path = "doc.txt"
loader = TextLoader(f_path)
documents = loader.load()
text_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=20)
docs = text_splitter.split_documents(documents)
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
vector_db = Milvus.from_documents(
documents,
embeddings,
connection_args={"host": "localhost", "port": "19530"},
)
query = "Who is the author of this study?"
docs = vector_db.similarity_search(query, 2)
chain = load_qa_chain(
OpenAIChat(model_name="gpt-3.5-turbo", temperature=0.1), chain_type="refine"
)
print(chain.run(input_documents=docs, question=query))
```
Gives
`ValueError: Missing some input keys: {'existing_answer'}`
Not sure what's going wrong. I suppose for `refine` chain the first call should not be looking for `existing_answer`, right? | Refine Chain Error | https://api.github.com/repos/langchain-ai/langchain/issues/1724/comments | 5 | 2023-03-17T02:43:06Z | 2023-11-01T16:08:05Z | https://github.com/langchain-ai/langchain/issues/1724 | 1,628,593,372 | 1,724 |
[
"hwchase17",
"langchain"
]
| the example notebook at https://github.com/hwchase17/langchain/blob/master/docs/modules/agents/agent_toolkits/sql_database.ipynb is broken at the following line.
``` python
agent_executor = create_sql_agent(
llm=OpenAI(temperature=0),
toolkit=toolkit,
verbose=True
)
```
It produces a `KeyError: 'tools'`
This error is related to another error, `TypeError("cannot pickle 'module' object")`. It appears that during the Pydantic validation process for any of the SQLDatabaseTools (e.g., QuerySQLDataBaseTool), there is an attempt to deep copy the `db` attribute, which is a SQLDatabase object. The method used is `rv = reductor(4)`, followed by a call to `_reconstruct(x, memo, *rv)`. That call throws an error when creating the new SQLDatabase object because the required `engine` attribute is missing.
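One possible workaround (a sketch only; it assumes the toolkit's validation does not actually need an independent copy of the database) is to give `SQLDatabase` a `__deepcopy__` method:
```python
# Method to add on langchain.sql_database.SQLDatabase (a sketch, not merged code):
def __deepcopy__(self, memo):
    # Registering and returning self sidesteps copy._reconstruct re-calling
    # __init__ without the required `engine` argument.
    memo[id(self)] = self
    return self
```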
If a true deep copy is not necessary, the method may simply return `self` or do a shallow copy, but we need to make sure what the right approach is here. | example sql_database toolkit agent notebook throws "KeyError" in create_sql_agent | https://api.github.com/repos/langchain-ai/langchain/issues/1721/comments | 4 | 2023-03-17T01:16:14Z | 2023-04-12T12:41:52Z | https://github.com/langchain-ai/langchain/issues/1721 | 1,628,532,635 | 1,721 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/master/langchain/agents/agent_toolkits/sql/toolkit.py#L38
SqlDatabaseToolkit should have custom llm_chain field for QueryCheckerTool. This is causing issues when OpenAI is not available, as the QueryCheckerTool will automatically use OpenAI. | SqlDatabaseToolkit should have custom llm_chain for QueryCheckerTool | https://api.github.com/repos/langchain-ai/langchain/issues/1719/comments | 2 | 2023-03-17T00:12:47Z | 2023-09-18T16:23:20Z | https://github.com/langchain-ai/langchain/issues/1719 | 1,628,492,807 | 1,719 |
[
"hwchase17",
"langchain"
]
| I noticed there is no support for stop sequences in the langchain API.
Is this some deliberate choice, or should I make a PR to add support for it? | No stop sequneces supported for OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/1717/comments | 6 | 2023-03-16T20:05:04Z | 2023-09-28T16:11:13Z | https://github.com/langchain-ai/langchain/issues/1717 | 1,628,182,486 | 1,717 |
[
"hwchase17",
"langchain"
]
| Hello, I've noticed that after the latest commit of @MthwRobinson there are two different modules for loading Word documents. Could they be unified into a single version? Also, there are two notebooks that do almost the same thing.
[docx.py](https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/langchain/document_loaders/docx.py) and [word_document.py](https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/langchain/document_loaders/word_document.py)
[microsoft_word.ipynb](https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/docs/modules/document_loaders/examples/microsoft_word.ipynb) and [word_document.ipynb](https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/docs/modules/document_loaders/examples/word_document.ipynb)
Or am I just missing something?
| Two different document loaders for Microsoft Word files | https://api.github.com/repos/langchain-ai/langchain/issues/1716/comments | 5 | 2023-03-16T18:57:09Z | 2023-05-17T16:18:06Z | https://github.com/langchain-ai/langchain/issues/1716 | 1,628,093,530 | 1,716 |
[
"hwchase17",
"langchain"
]
| I was going through [Vectorstore Agent](https://langchain.readthedocs.io/en/latest/modules/agents/agent_toolkits/vectorstore.html?highlight=vectorstore%20agent#vectorstore-agent) tutorial and I am facing issues with the `VectorStoreQAWithSourcesTool`.
Looking closely at the code https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/langchain/chains/qa_with_sources/base.py#L119-L120
it appears the parsing rule might be too strict to extract the list of sources. Often, when the agent is fetching information from the vectorstore, the `VectorStoreQAWithSourcesTool` output is something like `....SOURCES:\n<source1>\n<source2>...` instead of `...SOURCES: <source1>,<source2>...`.
Due to this, the `VectorStoreQAWithSourcesTool` output is broken and the agent response is impacted.
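A rough sketch of a more tolerant split (illustrative only, not the library's current behaviour): accept newlines as well as commas after the last `SOURCES:` marker.
```python
import re

def split_answer_and_sources(output: str) -> tuple[str, list[str]]:
    """Split a QA-with-sources answer into the answer text and a list of sources."""
    answer, _, raw_sources = output.rpartition("SOURCES:")
    sources = [s.strip() for s in re.split(r"[,\n]", raw_sources) if s.strip()]
    return answer.strip(), sources
```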
P.S. I used `Chroma` as the vectorstore db and `OpenAI(temperature=0)` as the LLM. | bug(QA with Sources): source parsing is not reliable | https://api.github.com/repos/langchain-ai/langchain/issues/1712/comments | 3 | 2023-03-16T15:47:53Z | 2023-03-31T15:41:59Z | https://github.com/langchain-ai/langchain/issues/1712 | 1,627,779,986 | 1,712 |
[
"hwchase17",
"langchain"
]
| I have been performing tests on the following dataset (https://www.kaggle.com/datasets/peopledatalabssf/free-7-million-company-dataset) for a couple of days now. Yesterday and earlier agent had no problems with answering questions like:
- "list all companies starting with 'a'"
- "what is the company earliest founded and in which year?"
Now it picks non-existent columns for analysis or completely misunderstands the questions. Any ideas why such a performance drop happened? Is OpenAI changing the LLMs on their side? | performance of CSVAgent dropped significantly | https://api.github.com/repos/langchain-ai/langchain/issues/1710/comments | 1 | 2023-03-16T13:40:36Z | 2023-09-10T16:42:15Z | https://github.com/langchain-ai/langchain/issues/1710 | 1,627,497,863 | 1,710 |
[
"hwchase17",
"langchain"
]
| I've just implemented the AsyncCallbackManager and handler, and everything works fine except for the fact I receive this warning
> /usr/local/lib/python3.10/site-packages/langchain/agents/agent.py:456: RuntimeWarning: coroutine 'AsyncCallbackManager.on_agent_action' was never awaited
> self.callback_manager.on_agent_action(
> RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Which seems to prevent the logging I'm trying to implement with the handler.
Relevant code snippet:
https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/langchain/agents/agent.py#L456-L458
Looking at the codebase, it looks like the other callback manager calls are awaited, but not sure if I'm using the wrong executor?
https://github.com/hwchase17/langchain/blob/3c2468452284ee37b8a88a20b864255fa4385b65/langchain/agents/agent.py#L384-L387
| on_agent_action was never awaited | https://api.github.com/repos/langchain-ai/langchain/issues/1708/comments | 6 | 2023-03-16T11:50:17Z | 2023-03-20T18:33:18Z | https://github.com/langchain-ai/langchain/issues/1708 | 1,627,311,222 | 1,708 |
[
"hwchase17",
"langchain"
]
| When using `ConversationSummaryMemory` or `ConversationSummaryBufferMemory`, I sometimes want to see the details of the summarization. I would like to add a parameter for displaying this information.
This feature can be easily added by introducing a `verbose` parameter to the `SummarizerMixin` class, setting its default value to `False`, and then passing it to `LLMChain`. When setting up the memory, you can simply specify `verbose` like this:
```
memory = ConversationSummaryMemory(
llm=llm_summarization,
verbose=True,
)
```
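A rough sketch of how the flag could be threaded through (field and method names are approximate, not the current code):
```python
from langchain.chains.llm import LLMChain

class SummarizerMixinSketch:
    """Illustrates passing a `verbose` flag from the memory class into the
    internal LLMChain that produces the running summary."""

    verbose: bool = False  # proposed new field, default off

    def _summary_chain(self) -> LLMChain:
        # self.llm and self.prompt are assumed to exist, as on the real mixin.
        return LLMChain(llm=self.llm, prompt=self.prompt, verbose=self.verbose)
```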
I created this issue to determine if this feature is necessary. If it is needed, I can add the feature myself and submit a Pull Request. | Displaying details of summarization in `ConversationSummaryMemory` and `ConversationSummaryBufferMemory` | https://api.github.com/repos/langchain-ai/langchain/issues/1705/comments | 1 | 2023-03-16T06:55:24Z | 2023-09-10T16:42:20Z | https://github.com/langchain-ai/langchain/issues/1705 | 1,626,834,563 | 1,705 |
[
"hwchase17",
"langchain"
]
| Looking at the [tracing](https://langchain.readthedocs.io/en/latest/tracing.html) docs, would be great if I could point the output at my own tracing backend using a portable/open-standard format.
As the tracing features are built out, it would be amazing if an output option were just [OTLP using the Python SDK](https://opentelemetry.io/docs/instrumentation/python/), which is very well supported by a number of different tools. | Support OpenTelemetry for Tracing | https://api.github.com/repos/langchain-ai/langchain/issues/1704/comments | 2 | 2023-03-16T04:03:41Z | 2023-09-18T16:25:28Z | https://github.com/langchain-ai/langchain/issues/1704 | 1,626,671,239 | 1,704 |
[
"hwchase17",
"langchain"
]
| I ask you to test the MRKL agent prompt on different LLMs. In my experience, no LLM other than the ones by OpenAI can handle the MRKL prompt. I am but one man and make mistakes, so I want someone to double-check this claim. Bloom came close, but even it can't handle the prompt. None of them can format the response to the required standard, and LangChain starts to throw parsing errors. | Testing ZeroShotAgent using different LLM other than OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/1703/comments | 1 | 2023-03-16T03:51:09Z | 2023-08-24T16:14:46Z | https://github.com/langchain-ai/langchain/issues/1703 | 1,626,660,999 | 1,703 |
[
"hwchase17",
"langchain"
]
| Currently it takes 10-15 seconds to get a response from OpenAI.
I am using example similar to this https://langchain.readthedocs.io/en/latest/modules/agents/examples/agent_vectorstore.html
| Are there any ways to increase response speed? | https://api.github.com/repos/langchain-ai/langchain/issues/1702/comments | 22 | 2023-03-15T21:59:35Z | 2024-05-13T16:07:15Z | https://github.com/langchain-ai/langchain/issues/1702 | 1,626,345,845 | 1,702 |
[
"hwchase17",
"langchain"
]
| This won't work, because your class needs runhouse. I would like to use it on my GPU server, so it should not log in via SSH to its own machine:
```
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware="cuda",
model_reqs=["./", "torch", "transformers"],
inference_fn=inference_fn
)
```
So here cuda would not work... | No local GPU for self hosted embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/1695/comments | 3 | 2023-03-15T16:41:37Z | 2023-10-08T16:08:12Z | https://github.com/langchain-ai/langchain/issues/1695 | 1,625,882,506 | 1,695 |
[
"hwchase17",
"langchain"
]
| Sorry if this is a dumb question, but if I have a txt file with sentences separated by newlines, how do I split by newlines to generate embeddings for each sentence? | Quick question about splitter | https://api.github.com/repos/langchain-ai/langchain/issues/1694/comments | 2 | 2023-03-15T15:31:56Z | 2023-03-15T17:30:11Z | https://github.com/langchain-ai/langchain/issues/1694 | 1,625,759,946 | 1,694 |
[
"hwchase17",
"langchain"
]
| I am using this example https://langchain.readthedocs.io/en/latest/modules/chat/examples/agent.html with my data
If I am not using Chat (ChatOpenAI) it works without issue
So it is probably some issue with how you are handling the output.
```
Entering new AgentExecutor chain...
DEBUG:Chroma:time to pre process our knn query: 1.430511474609375e-06
DEBUG:Chroma:time to run knn query: 0.0008881092071533203
Thought: I am not sure if I was created by AA or not. I need to use the QA System to find out.
Action: QA System
Action Input: .....
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-52-e127e5158c61>](https://localhost:8080/#) in <module>
----> 1 agent_executor.run("Are you create by OpenAI?")
8 frames
[/usr/local/lib/python3.9/dist-packages/langchain/agents/mrkl/base.py](https://localhost:8080/#) in get_action_and_input(llm_output)
44 match = re.search(regex, llm_output, re.DOTALL)
45 if not match:
---> 46 raise ValueError(f"Could not parse LLM output: `{llm_output}`")
47 action = match.group(1).strip()
48 action_input = match.group(2)
ValueError: Could not parse LLM output: `Based on the information available to me
``` | Bug with parsing | https://api.github.com/repos/langchain-ai/langchain/issues/1688/comments | 2 | 2023-03-15T11:32:57Z | 2023-09-10T16:42:25Z | https://github.com/langchain-ai/langchain/issues/1688 | 1,625,343,754 | 1,688 |
[
"hwchase17",
"langchain"
]
| With the rise of multi-modal models (GPT-4 announced today), and how popular LangChain is among the research community, we should be ready for new modalities. Ideally, the user should be able to do something like:
```python
vllm = OpenAI(model_name='gpt-4', max_tokens=through_the_roof)
prompt = PromptTemplate(
input_variables=["user_text", "image", "prompt"],
template="{prompt} {image} Question: {user_text}")
vl_chain = VLLMChain(vllm=vllm, prompt=prompt, verbose=True)
vl_chain.predict(prompt=prompt, image=pil_image, user_text=user_text)
```
If no one is working on this currently, I'm willing to try to add this functionality in my spare time.
The uncertain part is, we don't know if GPT-4 is going to be like Flamingo (allowing you to put multiple images in a specific position and order the document) or like BLIP-2 (cross-attention between image and text, but in no particular order). My educated guess is the former. | [Feature suggestion] Multi-modal Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/1677/comments | 1 | 2023-03-15T01:40:31Z | 2023-03-17T18:00:16Z | https://github.com/langchain-ai/langchain/issues/1677 | 1,624,571,606 | 1,677 |
[
"hwchase17",
"langchain"
]
| I was attempting to split this volume:
https://terrorgum.com/tfox/books/introductionto3dgameprogrammingwithdirectx12.pdf
using
text_splitter = RecursiveCharacterTextSplitter(
chunk_size = 1000,
chunk_overlap = 10,
length_function = len,
separators="\n\n"
)
On page 289, it enters an infinite recursive loop where it only has one split and no separators in the split. It then keeps recursively calling self.split_text(s) until it errors out.
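A rough sketch of the kind of guard that could surface this earlier (illustrative, not the library's current code): a chunk that is longer than `chunk_size` but contains none of the separators is exactly the case that recurses forever.
```python
def check_splittable(text: str, separators: list[str], chunk_size: int) -> None:
    """Raise a clear error instead of recursing when a chunk cannot be reduced."""
    if len(text) > chunk_size and not any(sep and sep in text for sep in separators):
        raise ValueError(
            f"Chunk of length {len(text)} exceeds chunk_size={chunk_size} "
            f"and contains none of the separators {separators!r}"
        )
```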
modifying to a chunk_size of 2000 with a chunk overlap of 100 makes this go away, but it seems there should be some detection of infinite recursion happening, as the error result wasn't immediately obvious. | RecursiveCharacterTextSplitter.split_text can enter infinite recursive loop | https://api.github.com/repos/langchain-ai/langchain/issues/1663/comments | 7 | 2023-03-14T17:29:33Z | 2023-11-03T16:08:27Z | https://github.com/langchain-ai/langchain/issues/1663 | 1,623,978,467 | 1,663 |
[
"hwchase17",
"langchain"
]
| Google has just announced the PaLM API [https://developers.googleblog.com/2023/03/announcing-palm-api-and-makersuite.html](https://developers.googleblog.com/2023/03/announcing-palm-api-and-makersuite.html)!!
| Google Cloud PaLM API integration | https://api.github.com/repos/langchain-ai/langchain/issues/1661/comments | 3 | 2023-03-14T14:38:18Z | 2023-09-28T16:11:18Z | https://github.com/langchain-ai/langchain/issues/1661 | 1,623,618,445 | 1,661 |
[
"hwchase17",
"langchain"
]
| Hi. I am trying to generate a customized prompt where I need to escape the curly braces. Whenever I use curly braces in my prompt, they are treated as placeholders.
So, can you please suggest how to escape the curly braces in the prompts? | How to escape curly-braces "{}" in a customized prompt? | https://api.github.com/repos/langchain-ai/langchain/issues/1660/comments | 9 | 2023-03-14T12:59:42Z | 2024-07-28T09:30:47Z | https://github.com/langchain-ai/langchain/issues/1660 | 1,623,430,725 | 1,660 |
[
"hwchase17",
"langchain"
]
| Now that Microsoft has released gpt-35-turbo, please can AzureOpenAI be added to chat_models?
Thanks | Add AzureOpenAI for chat_models | https://api.github.com/repos/langchain-ai/langchain/issues/1659/comments | 1 | 2023-03-14T12:03:16Z | 2023-03-20T12:41:19Z | https://github.com/langchain-ai/langchain/issues/1659 | 1,623,336,280 | 1,659 |
[
"hwchase17",
"langchain"
]
| I am having trouble using langchain with llama-index (gpt-index). I don't understand what is happening on the langchain side.
When I use OpenAIChat as LLM then sometimes with some user queries I get this error:
```
raise ValueError(f"Could not parse LLM output: `{llm_output}`")
ValueError: Could not parse LLM output: `Thought: Do I need to use a tool? No
```
And to make it worse, when I switch to the OpenAI LLM, the agent almost never decides to use the tool.
I am okay with either solution but I just can't seem to fix it. What is happening?
My code:
```
from langchain.agents import ConversationalAgent, Tool, AgentExecutor
from langchain import OpenAI, LLMChain
from langchain.llms import OpenAIChat
TOOLS = [
Tool(
name = "GPT Index",
func=lambda q: str(INDEX.query(q, llm_predictor=LLM_PREDICTOR, text_qa_template=QA_PROMPT, similarity_top_k=5, response_mode="compact")),
description="useful for when you need to answer questions about weddings or marriage.",
return_direct=True
),
]
LLM=OpenAIChat(temperature=0)
prefix = """Assistant is a large language model trained by OpenAI.
Assistant is designed to support a wide range of tasks, from answering simple questions to providing detailed explanations and discussions on a wide range of topics. As a language model, Assistant can generate human-like text based on input received, and can provide natural-sounding conversation or consistent, on-topic responses.
Assistant is constantly learning and improving, and its capabilities are always evolving. It can process vast amounts of text to understand and provide accurate and helpful answers to a variety of questions. Additionally, Assistant can generate its own text based on received input, allowing it to participate in discussions on a variety of topics, or provide explanations and commentary.
Overall, Assistant is a powerful tool that can support a variety of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or want to have a conversation about a specific topic, Assistant is here to help.
TOOLS:
------
Assistant has access to the following tools."""
suffix = """Answer the questions you know to the best of your knowledge.
Begin!
User Input: {input}
{agent_scratchpad}"""
prompt = ConversationalAgent.create_prompt(
TOOLS,
prefix=prefix,
suffix=suffix,
input_variables=["input", "agent_scratchpad"]
)
llm_chain = LLMChain(llm=LLM, prompt=prompt)
agent = ConversationalAgent(llm_chain=llm_chain)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=TOOLS, verbose=True)
response = agent_executor.run(user_message)
```
| ValueError: Could not parse LLM output | https://api.github.com/repos/langchain-ai/langchain/issues/1657/comments | 13 | 2023-03-14T09:01:11Z | 2024-01-30T00:52:57Z | https://github.com/langchain-ai/langchain/issues/1657 | 1,623,023,032 | 1,657 |
[
"hwchase17",
"langchain"
]
| I am trying to follow the example from this [URL](https://langchain.readthedocs.io/en/latest/modules/indexes/chain_examples/question_answering.html), but I am getting the above error. What might be wrong?
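A guess, since the snippet isn't shown: this error usually means the wrapper returned by `VectorstoreIndexCreator` was passed where a raw vector store was expected. The wrapper exposes the underlying store on `.vectorstore`; a minimal sketch:
```python
from langchain.document_loaders import TextLoader
from langchain.indexes import VectorstoreIndexCreator

loader = TextLoader("state_of_the_union.txt")  # any loader; the file name is just an example
index = VectorstoreIndexCreator().from_loaders([loader])

# The wrapper itself exposes query()/query_with_sources();
# the store with similarity_search() lives one level down.
docs = index.vectorstore.similarity_search("What did the president say?", k=4)
```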
## Environment
OS: Windows 11
Python: 3.9
| AttributeError: 'VectorStoreIndexWrapper' object has no attribute 'similarity_search' | https://api.github.com/repos/langchain-ai/langchain/issues/1655/comments | 5 | 2023-03-14T07:04:19Z | 2023-09-29T16:09:57Z | https://github.com/langchain-ai/langchain/issues/1655 | 1,622,855,759 | 1,655 |
[
"hwchase17",
"langchain"
]
| With the newly released ChatOpenAI model, the completion output is being cut off randomly partway through.
For example I used the below input
Write me an essay on Pune
I got this output
Pune, also known as Poona, is a city located in the western Indian state of Maharashtra. It is the second-largest city in the state and is often referred to as the "Oxford of the East" due to its reputation as a center of education and research. Pune is a vibrant city with a rich history, diverse culture, and a thriving economy.\n\nThe history of Pune dates back to the 8th century when it was founded by the Rashtrakuta dynasty. Over the centuries, it has been ruled by various dynasties, including the Marathas, the Peshwas, and the British. Pune played a significant role in India\'s struggle for independence, and many freedom fighters, including Mahatma Gandhi, spent time in the city.\n\nToday, Pune is a bustling metropolis with a population of over 3 million people. It is home to some of the most prestigious educational institutions in India, including the University of Pune, the Indian Institute of Science Education and Research, and the National Defense Academy. The city is also a hub for research and development, with many multinational companies setting up their research centers in Pune.\n\nPune is a city of contrasts, with modern skyscrapers standing alongside ancient temples and historical landmarks. The city\'s
As you can see, the message is cut off partway through. I followed the official documentation from here: https://github.com/hwchase17/langchain/blob/master/docs/modules/chat/getting_started.ipynb
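One thing worth checking (an assumption, since the model construction isn't shown): if `max_tokens` is left at a small value or capped by default, long answers get truncated. Passing it explicitly is a quick test:
```python
from langchain.chat_models import ChatOpenAI

# Explicit completion budget; raise it if essays are still being truncated.
chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.7, max_tokens=1500)
```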
This was not the issue before with OpenAIChat but with ChatOpenAI this is posing an issue | Output cutoff with ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/1652/comments | 6 | 2023-03-14T05:37:02Z | 2023-07-07T08:03:54Z | https://github.com/langchain-ai/langchain/issues/1652 | 1,622,767,469 | 1,652 |
[
"hwchase17",
"langchain"
]
| The SQL Database Agent can't adapt to the turbo model; there is a bug in the get_action_and_input function.
this is the code of get_action_and_input for current version:
```python
# file:langchain/agents/mrkl/base.py
def get_action_and_input(llm_output: str) -> Tuple[str, str]:
"""Parse out the action and input from the LLM output.
Note: if you're specifying a custom prompt for the ZeroShotAgent,
you will need to ensure that it meets the following Regex requirements.
The string starting with "Action:" and the following string starting
with "Action Input:" should be separated by a newline.
"""
if FINAL_ANSWER_ACTION in llm_output:
return "Final Answer", llm_output.split(FINAL_ANSWER_ACTION)[-1].strip()
regex = r"Action: (.*?)[\n]*Action Input: (.*)"
match = re.search(regex, llm_output, re.DOTALL)
if not match:
raise ValueError(f"Could not parse LLM output: `{llm_output}`")
action = match.group(1).strip()
action_input = match.group(2)
return action, action_input.strip(" ").strip('"')
```
When the model is text-davinci-003, it works well. But when the model is gpt-3.5-turbo, there is an extra '\n' at the end of llm_output, which leads to a wrong action_input result.
If the following code is added to the function, it works well again.
this is my code:
```python
# file:langchain/agents/mrkl/base.py
def get_action_and_input(llm_output: str) -> Tuple[str, str]:
"""Parse out the action and input from the LLM output.
Note: if you're specifying a custom prompt for the ZeroShotAgent,
you will need to ensure that it meets the following Regex requirements.
The string starting with "Action:" and the following string starting
with "Action Input:" should be separated by a newline.
"""
# this is what i added
llm_output = llm_output.rstrip("\n")
if FINAL_ANSWER_ACTION in llm_output:
return "Final Answer", llm_output.split(FINAL_ANSWER_ACTION)[-1].strip()
regex = r"Action: (.*?)[\n]*Action Input: (.*)"
match = re.search(regex, llm_output, re.DOTALL)
if not match:
raise ValueError(f"Could not parse LLM output: `{llm_output}`")
action = match.group(1).strip()
action_input = match.group(2)
return action, action_input.strip(" ").strip('"')
```
> I am using langchain version 0.0.108 | SQL Database Agent can't adapt to the turbo model,there is a bug in get_action_and_input function | https://api.github.com/repos/langchain-ai/langchain/issues/1649/comments | 3 | 2023-03-14T02:42:12Z | 2023-05-12T05:00:41Z | https://github.com/langchain-ai/langchain/issues/1649 | 1,622,625,055 | 1,649 |
[
"hwchase17",
"langchain"
]
| The conversational agent at /langchain/agents/conversational/base.py looks to have a regex that isn't good for pulling out multiline Action Inputs, whereas the MRKL agent at /langchain/agents/mrkl/base.py has a good regex for pulling out multiline Action Inputs.
When I switched FROM the top snippet at /langchain/agents/conversational/base.py TO the bottom one at /langchain/agents/mrkl/base.py, it worked for me! (The patterns themselves are identical; the difference is the `re.DOTALL` flag, which lets `.` match newlines.)
def _extract_tool_and_input(self, llm_output: str) -> Optional[Tuple[str, str]]:
if f"{self.ai_prefix}:" in llm_output:
return self.ai_prefix, llm_output.split(f"{self.ai_prefix}:")[-1].strip()
**regex = r"Action: (.*?)[\n]*Action Input: (.*)"
match = re.search(regex, llm_output)**
def get_action_and_input(llm_output: str) -> Tuple[str, str]:
"""Parse out the action and input from the LLM output.
Note: if you're specifying a custom prompt for the ZeroShotAgent,
you will need to ensure that it meets the following Regex requirements.
The string starting with "Action:" and the following string starting
with "Action Input:" should be separated by a newline.
"""
if FINAL_ANSWER_ACTION in llm_output:
return "Final Answer", llm_output.split(FINAL_ANSWER_ACTION)[-1].strip()
**regex = r"Action: (.*?)[\n]*Action Input: (.*)"
match = re.search(regex, llm_output, re.DOTALL)**
| conversational/agent has a regex not good for multiline Action Inputs coming from the LLM | https://api.github.com/repos/langchain-ai/langchain/issues/1645/comments | 4 | 2023-03-13T20:58:38Z | 2023-08-20T16:08:47Z | https://github.com/langchain-ai/langchain/issues/1645 | 1,622,249,283 | 1,645 |
[
"hwchase17",
"langchain"
]
| Looks like `BaseLLM` supports caching (via `langchain.llm_cache`), but `BaseChatModel` does not. | Add caching support to BaseChatModel | https://api.github.com/repos/langchain-ai/langchain/issues/1644/comments | 9 | 2023-03-13T20:34:16Z | 2023-07-04T10:06:20Z | https://github.com/langchain-ai/langchain/issues/1644 | 1,622,218,946 | 1,644 |
[
"hwchase17",
"langchain"
]
| Can the summarization chain be used with ChatGPT's API, `gpt-3.5-turbo`? I have tried the following two code snippets, but they result in this error.
```
openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 6063 tokens. Please reduce the length of the messages.
```
Trial 1
```
from langchain import OpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
full_text = "The content of this article, https://nymag.com/news/features/mark-zuckerberg-2012-5/?mid=nymag_press"
model_name = "gpt-3.5-turbo"
llm = OpenAI(model_name=model_name, temperature=0)
documents = [Document(page_content=full_text)]
# Summarize the document by summarizing each document chunk and then summarizing the combined summary
chain = load_summarize_chain(llm, chain_type="map_reduce")
summary = chain.run(documents)
```
Trial 2
```
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from langchain.llms import OpenAIChat
full_text = "The content of this article, https://nymag.com/news/features/mark-zuckerberg-2012-5/?mid=nymag_press"
model_name = "gpt-3.5-turbo"
llm = OpenAIChat(model_name=model_name, temperature=0)
documents = [Document(page_content=full_text)]
# Summarize the document by summarizing each document chunk and then summarizing the combined summary
chain = load_summarize_chain(llm, chain_type="map_reduce")
summary = chain.run(documents)
```
I changed the Trial 1 snippet to the following but got the error below due to a list of prompts provided to the endpoint. Also, it appears that OpenAIChat doesn't have a `llm.modelname_to_contextsize` despite the endpoint not accepting more than `4097` tokens.
```
ValueError: OpenAIChat currently only supports single prompt, got ['Write a concise summary of the following:\n\n\n"
```
Trial 3
```
from langchain import OpenAI
from langchain.chains.summarize import load_summarize_chain
from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter
full_text = "The content of this article, https://nymag.com/news/features/mark-zuckerberg-2012-5/?mid=nymag_press"
model_name = "gpt-3.5-turbo"
llm = OpenAI(model_name=model_name, temperature=0)
recursive_character_text_splitter = (
RecursiveCharacterTextSplitter.from_tiktoken_encoder(
encoding_name="cl100k_base" if model_name == "gpt-3.5-turbo" else "p50k_base",
chunk_size=4097
if model_name == "gpt-3.5-turbo"
else llm.modelname_to_contextsize(model_name),
chunk_overlap=0,
)
)
text_chunks = recursive_character_text_splitter.split_text(full_text)
documents = [Document(page_content=text_chunk) for text_chunk in text_chunks]
# Summarize the document by summarizing each document chunk and then summarizing the combined summary
chain = load_summarize_chain(llm, chain_type="map_reduce")
summary = chain.run(documents)
```
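For what it's worth, two things seem to interact here (hedged observations, not a confirmed diagnosis): a chunk_size of 4097 leaves no headroom below the 4097-token context for the prompt template and the summary itself, and `OpenAIChat` is the single-prompt wrapper. A sketch of an alternative to try; whether `load_summarize_chain` accepts the chat-native `ChatOpenAI` wrapper depends on the langchain version:
```python
from langchain.chains.summarize import load_summarize_chain
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Leave room for the prompt template and the generated summary.
splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base", chunk_size=3000, chunk_overlap=0
)
# full_text as defined in the trials above.
docs = [Document(page_content=chunk) for chunk in splitter.split_text(full_text)]

chain = load_summarize_chain(llm, chain_type="map_reduce")
summary = chain.run(docs)
```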
Do you have any ideas on what needs to be changed to allow OpenAI's ChatGPT to work for summarization? Happy to help if I can. | ChatGPT's API model, gpt-3.5-turbo, doesn't appear to work for summarization tasks | https://api.github.com/repos/langchain-ai/langchain/issues/1643/comments | 11 | 2023-03-13T19:54:13Z | 2023-04-23T08:30:14Z | https://github.com/langchain-ai/langchain/issues/1643 | 1,622,162,962 | 1,643 |
[
"hwchase17",
"langchain"
]
| Hey guys,
I'm trying to use langchain because the Tool class is so handy and initialize_agent works well with it, but I am having trouble finding any documentation that allows me to run this self-hosted locally. Everything seems to be remote, but I have a system that I know is capable of what I'm trying to do.
Is there any way to specify arguments to either SelfHosted class, or even to runhouse, so that it runs on the computer you're currently working on, on the GPU connected to that computer, instead of SSH-ing into a remote instance?
Thanks | langchain.llms SelfHostedPipeline and SelfHostedHuggingFaceLLM | https://api.github.com/repos/langchain-ai/langchain/issues/1639/comments | 4 | 2023-03-13T17:11:12Z | 2023-10-14T20:14:17Z | https://github.com/langchain-ai/langchain/issues/1639 | 1,621,910,729 | 1,639 |
[
"hwchase17",
"langchain"
]
| https://github.com/hwchase17/langchain/blob/cb646082baa173fdee7f2b1e361be368acef4e7e/langchain/document_loaders/googledrive.py#L120
Suggestion: Include optional param `includeItemsFromAllDrives` when calling `service.files().list()`
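A sketch of the suggested call (parameter names are from the Drive v3 API; `service` and `query` are assumed to be set up as in the loader):
```python
results = (
    service.files()
    .list(
        q=query,
        includeItemsFromAllDrives=True,  # return items from shared drives too
        supportsAllDrives=True,
        fields="nextPageToken, files(id, name, mimeType)",
    )
    .execute()
)
```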
Reference: https://stackoverflow.com/questions/65388539/using-python-i-cant-access-shared-drive-folders-from-google-drive-api-v3 | GoogleDriveLoader not loading docs from Share Drives | https://api.github.com/repos/langchain-ai/langchain/issues/1634/comments | 0 | 2023-03-13T15:03:55Z | 2023-04-08T15:46:57Z | https://github.com/langchain-ai/langchain/issues/1634 | 1,621,682,210 | 1,634 |
[
"hwchase17",
"langchain"
]
| Hi, first off, I just want to say that I have been following this from the start, almost, and I see the amazing work you put in. It's an awesome project, well done!
Now that I finally have some time to dive in myself, I'm hoping someone can help me bootstrap an idea. I want to chain ChatGPT to Codex so that Chat will pass the coding task to Codex for more accurate code and incorporate the code into its answer and history/context.
Is this doable?
Many thanks! | Chaining Chat to Codex | https://api.github.com/repos/langchain-ai/langchain/issues/1631/comments | 3 | 2023-03-13T13:00:10Z | 2023-08-29T18:08:10Z | https://github.com/langchain-ai/langchain/issues/1631 | 1,621,441,455 | 1,631 |
[
"hwchase17",
"langchain"
]
| Dataframes (df) are generic containers for storing different data structures, and the pandas (or CSV) agent helps manipulate dfs effectively. But the current langchain implementation requires Python 3.9 to work with the pandas agent because of the following invocation:
https://github.com/hwchase17/langchain/blob/6e98ab01e1648924db3bf8c3c2a093b38ec380bb/langchain/agents/agent_toolkits/pandas/base.py#L30
Google Colab and many other easy-to-use platforms for developers, however, support only Python 3.8 as the stable version. If the pandas agent could be supported on Python 3.8, it would allow a lot more people to experiment with and use the agent. | Python3.8 support for pandas agent | https://api.github.com/repos/langchain-ai/langchain/issues/1623/comments | 1 | 2023-03-13T04:01:58Z | 2023-03-15T02:44:00Z | https://github.com/langchain-ai/langchain/issues/1623 | 1,620,711,203 | 1,623 |
[
"hwchase17",
"langchain"
]
| Filtering by metadata should be possible, but:
- ```langchain.vectorstores.chroma.similarity_search``` takes a ```filter``` input parameter but does not forward it to ```langchain.vectorstores.chroma.similarity_search_with_score```
- ```langchain.vectorstores.chroma.similarity_search_by_vector``` doesn't take this parameter as input, although it could be very useful without any additional complexity, and it would thus be coherent with the syntax of the two other functions | ChromaDB does not support filtering when using ```similarity_search``` or ```similarity_search_by_vector``` | https://api.github.com/repos/langchain-ai/langchain/issues/1619/comments | 3 | 2023-03-12T23:58:13Z | 2023-09-27T16:13:06Z | https://github.com/langchain-ai/langchain/issues/1619 | 1,620,559,206 | 1,619 |
[
"hwchase17",
"langchain"
]
| Hi!
Unstructured has support for providing in-memory text. Would be a great addition as currently, developers have to write and then read from a file if they want to load documents from memory.
I wouldn't mind opening the PR myself, but I want to make sure it's a wanted feature before I get on it. | Allow unstructured loaders to accept in-memory text | https://api.github.com/repos/langchain-ai/langchain/issues/1618/comments | 7 | 2023-03-12T23:09:58Z | 2024-03-18T10:11:03Z | https://github.com/langchain-ai/langchain/issues/1618 | 1,620,546,553 | 1,618 |
[
"hwchase17",
"langchain"
]
| The @microsoft team used LangChain to guide [Visual ChatGPT](https://arxiv.org/pdf/2303.04671.pdf).
Here's the architecture of Visual ChatGPT:
<img width="619" alt="screen" src="https://user-images.githubusercontent.com/6625584/230680492-7d737584-c56b-43b2-8240-ed02aaf9ac00.png">
Think of **Prompt Manager** as **LangChain** and **Visual Foundation Models** as **LangChain Tools**.
Here are five LangChain use cases that can be unlocked with [Visual ChatGPT](https://github.com/microsoft/visual-chatgpt):
1. **Visual Query Builder**: Combine LangChain's SQL querying capabilities with Visual ChatGPT's image understanding, allowing users to query databases with natural language and receive visualized results, such as charts or graphs.
2. **Multimodal Conversational Agent**: Enhance LangChain chatbots with Visual ChatGPT's image processing abilities, allowing users to send images and receive relevant responses, image-based recommendations, or visual explanations alongside text.
3. **Image-Assisted Summarization**: Integrate Visual ChatGPT's image understanding with LangChain's summarization chains to create image-enhanced summaries, providing context and visual aids that complement the text-based summary.
4. **Image Captioning and Translation**: Combine Visual ChatGPT's image captioning with LangChain's language translation chains to automatically generate captions and translations for images, making visual content more accessible to a global audience.
5. **Generative Art Collaboration**: Connect Visual ChatGPT's image generation capabilities with LangChain's creative writing chains, enabling users to create collaborative artwork and stories that combine text and images in innovative ways. | Visual ChatGPT | https://api.github.com/repos/langchain-ai/langchain/issues/1607/comments | 2 | 2023-03-12T02:38:03Z | 2023-09-18T16:23:25Z | https://github.com/langchain-ai/langchain/issues/1607 | 1,620,211,493 | 1,607 |
[
"hwchase17",
"langchain"
]
| The new [blog post](https://github.com/huggingface/blog/blob/main/trl-peft.md) for implementing LoRA + RLHF is great. Would appreciate if the example scripts are public!
3 scripts mentioned in the blog posts:
[Script](https://github.com/lvwerra/trl/blob/peft-gpt-neox-20b/examples/sentiment/scripts/gpt-neox-20b_peft/s01_cm_finetune_peft_imdb.py) - Fine tuning a Low Rank Adapter on a frozen 8-bit model for text generation on the imdb dataset.
[Script](https://github.com/lvwerra/trl/blob/peft-gpt-neox-20b/examples/sentiment/scripts/gpt-neox-20b_peft/s02_merge_peft_adapter.py) - Merging of the adapter layers into the base model’s weights and storing these on the hub.
[Script](https://github.com/lvwerra/trl/blob/peft-gpt-neox-20b/examples/sentiment/scripts/gpt-neox-20b_peft/s03_gpt-neo-20b_sentiment_peft.py) - Sentiment fine-tuning of a Low Rank Adapter to create positive reviews.
cross-link [pr](https://github.com/huggingface/blog/pull/920) | trl-peft example scripts not visible to public | https://api.github.com/repos/langchain-ai/langchain/issues/1606/comments | 3 | 2023-03-11T22:57:53Z | 2023-09-10T21:07:16Z | https://github.com/langchain-ai/langchain/issues/1606 | 1,620,171,008 | 1,606 |
[
"hwchase17",
"langchain"
]
| Wanted to leave some observations similar to the chain of thought *smirk* that was mentioned over here for [JSON agents](https://github.com/hwchase17/langchain/issues/1409).
You can copy this snippet to test for yourself (Mine is in Colab atm)
```
# Install dependencies
!pip install huggingface_hub cohere --quiet
!pip install openai==0.27.0 --quiet
!pip install langchain==0.0.107 --quiet
# Initialize any api keys that are needed
import os
os.environ["OPENAI_API_KEY"] = ""
os.environ["HUGGINGFACEHUB_API_TOKEN"] = ""
from langchain import LLMChain, OpenAI, Cohere, HuggingFaceHub, Prompt, SQLDatabase, SQLDatabaseChain
from langchain.model_laboratory import ModelLaboratory
from langchain.agents import create_csv_agent, load_tools, initialize_agent, Tool
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain
import requests
from langchain.agents import create_openapi_agent
from langchain.agents.agent_toolkits import OpenAPIToolkit
from langchain.llms.openai import OpenAI
from langchain.requests import RequestsWrapper
from langchain.tools.json.tool import JsonSpec
import yaml
import urllib
# Core LLM model
llm = OpenAI(temperature=0)
# Create an API agent
# LoveCraft - .yaml API
yaml_url = "https://raw.githubusercontent.com/APIs-guru/openapi-directory/main/APIs/randomlovecraft.com/1.0/openapi.yaml"
urllib.request.urlretrieve(yaml_url, "lovecraft.yml")
with open("/content/lovecraft.yml") as f:
data = yaml.load(f, Loader=yaml.FullLoader)
lovecraft_json_spec=JsonSpec(dict_=data, max_value_length=4000)
headers = {}
requests_wrapper=RequestsWrapper(headers=headers)
lovecraft_json_spec_toolkit = OpenAPIToolkit.from_llm(OpenAI(temperature=0), lovecraft_json_spec, requests_wrapper, verbose=True)
lovecraft_agent_executor = create_openapi_agent(
llm=OpenAI(temperature=0),
toolkit=lovecraft_json_spec_toolkit,
verbose=True
)
tools = [
# Tool(
# name = "Search",
# func=search.run,
# description="useful for when you need to answer questions about current events. You should ask targeted questions"
# ),
Tool(
name="Lovecraft API - randomlovecraft.com",
func=lovecraft_agent_executor.run,
description="Access randomlovecraft.com documentation, process API responses, and perform GET and POST requests to randomlovecraft.com"
)
]
mrkl = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
mrkl.run("Can you get me a random Lovecraft sentence?")
mrkl.run("Can you get me a random Lovecraft sentence using /sentences`?")
mrkl.run("Using the `https://randomlovecraft.com/api` base URL, make a `GET` request random Lovecraft sentence using `/sentences`. Use a limit of 1.")
```
I pulled the OpenAPI example from the docs to do some testing on my own. I wanted to see if it was possible for an agent to fetch and process API docs and to be able to return the response for use in other applications. Some of the key things I noted:
# 🐘Huge API Responses
If your response is large, it will directly affect the agent's ability to output the response, because the response counts against the token limit. Here the `Observation` is included in the token limit:
 a possible workaround for this could be routing the assembled requests over to a server / glue tool like [make](https://www.make.com/en?pc=dougjoe). Still haven't looked at other tools to see what's possible! Perhaps just wrapping that request in another request to a server would suffice.
# 🤔Verbosity is Better?
There is a weird grey area between too detailed and not detailed enough when it comes to the user prompt; for example, check these 3 prompts out:
## ✅Test1:
### input:
```
Can you get me a random Lovecraft sentence
```
### output:
```
> Finished chain.
A locked portfolio, bound in tanned human skin, held certain unknown and unnamable drawings which it was rumoured Goya had perpetrated but dared not acknowledge.
```
## 🙅♂️Test2:
### input:
```
Can you get me a random Lovecraft sentence using /sentences?
```
### output:
```
> Finished chain.
To get a random Lovecraft sentence, make a GET request to https://randomlovecraft.com/api/sentences?limit=1.
```
## ✅Test3:
### input:
```
Using the `https://randomlovecraft.com/api` base URL, make a `GET` request random Lovecraft sentence using `/sentences`. Use a limit of 1.
```
### output:
```
> Finished chain.
The sentence returned from the GET request to https://randomlovecraft.com/api/sentences?limit=1 is "Derby did not offer to relinquish the wheel, and I was glad of the speed with which Portsmouth and Newburyport flashed by."
```
# 🔑Expensive Key Search
I ran out of my free OpenAI credits in a matter of 5 hours of testing various tools with Langchain 😅. Be mindful of how expensive this is, because each action uses a request to the core LLM. Specifically, I noticed that `json_spec_list_keys` is a simple layer-by-layer search where each nested object observation costs an LLM request. This can add up, especially when looping. Workaround-wise, you'd probably want to either use a lighter/cheaper model for the agent or have some way to "store" `Action Inputs` and `Observations`. I guess that can be [solved here](https://langchain.readthedocs.io/en/latest/modules/memory/types/entity_summary_memory.html); I haven't implemented it yet.
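Two cheap mitigations worth trying (a sketch; whether they keep the OpenAPI agent accurate enough is untested here, and it reuses `create_openapi_agent` and `lovecraft_json_spec_toolkit` from the snippet above): a smaller model for the key-walking agent, plus langchain's LLM cache so repeated identical calls aren't billed twice.
```python
import langchain
from langchain.cache import InMemoryCache
from langchain.llms import OpenAI

langchain.llm_cache = InMemoryCache()  # dedupe identical LLM calls

cheap_llm = OpenAI(model_name="text-curie-001", temperature=0)  # cheaper than davinci
lovecraft_agent_executor = create_openapi_agent(
    llm=cheap_llm,
    toolkit=lovecraft_json_spec_toolkit,
    verbose=True,
)
```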
All in all, this tool is absolutely incredible! So happy I stumbled across this library!
| Observations and Limitations of API Tools | https://api.github.com/repos/langchain-ai/langchain/issues/1603/comments | 2 | 2023-03-11T21:09:46Z | 2023-11-08T11:32:09Z | https://github.com/langchain-ai/langchain/issues/1603 | 1,620,146,572 | 1,603 |
[
"hwchase17",
"langchain"
]
| In the current `ConversationBufferWindowMemory`, only the last `k` interactions are kept while the conversation history prior to that is deleted.
A memory similar to this, which keeps all conversation history below `max_token_limit` and deletes conversation history from the beginning when it exceeds the limit, could be useful.
This is similar to `ConversationSummaryBufferMemory`, but instead of summarizing the conversation history when it exceeds `max_token_limit`, it simply discards it.
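A rough sketch of the idea (illustrative, not an existing class):
```python
class TokenWindowBuffer:
    """Keep raw conversation turns; drop the oldest ones once the token budget is exceeded."""

    def __init__(self, llm, max_token_limit: int = 2000):
        self.llm = llm
        self.max_token_limit = max_token_limit
        self.turns: list[str] = []

    def add_turn(self, text: str) -> None:
        self.turns.append(text)
        while self.turns and self._total_tokens() > self.max_token_limit:
            self.turns.pop(0)  # discard from the beginning, no summarization

    def _total_tokens(self) -> int:
        return sum(self.llm.get_num_tokens(t) for t in self.turns)
```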
It is a very simple memory, but I think it could be useful in some situations. | Idea: A memory similar to ConversationBufferWindowMemory but utilizing token length | https://api.github.com/repos/langchain-ai/langchain/issues/1598/comments | 3 | 2023-03-11T12:05:23Z | 2023-09-10T16:42:36Z | https://github.com/langchain-ai/langchain/issues/1598 | 1,619,983,645 | 1,598 |
[
"hwchase17",
"langchain"
]
| I have a use case where I want to be able to create multiple indices of the same set of documents, essentially each index will be built based on some criteria so that I can query from the right set of documents. (I am using FAISS at the moment which does not have great options for filtering within one giant index so they recommend creating multiple indices)
It would be expensive to generate embeddings by calling OpenAI APIs for each document multiple times to populate each of the indices. I think having an interface similar to `add_texts` and `add_documents` which allows the user to pass the embeddings explicitly might be an option to achieve this?
As I write, I think I might be able to get around by passing a wrapper function to FAISS as the embedding function which can internally cache the embeddings for each document and avoid the duplicate calls to the embeddings API.
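A rough sketch of that wrapper (illustrative):
```python
from langchain.embeddings import OpenAIEmbeddings

class CachedEmbeddings:
    """Memoize embed_documents so several FAISS indices built from the same
    documents trigger only one embedding call per unique text."""

    def __init__(self, base=None):
        self.base = base or OpenAIEmbeddings()
        self._cache: dict[str, list[float]] = {}

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        missing = [t for t in texts if t not in self._cache]
        if missing:
            for text, vector in zip(missing, self.base.embed_documents(missing)):
                self._cache[text] = vector
        return [self._cache[t] for t in texts]

    def embed_query(self, text: str) -> list[float]:
        return self.base.embed_query(text)
```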
However, creating this issue in case others also think that an `add_embeddings` API or something similar sounds like a good idea? | Add support for creating index by passing embeddings explicitly | https://api.github.com/repos/langchain-ai/langchain/issues/1597/comments | 1 | 2023-03-11T10:41:51Z | 2023-08-24T16:14:51Z | https://github.com/langchain-ai/langchain/issues/1597 | 1,619,963,356 | 1,597 |
[
"hwchase17",
"langchain"
]
| The SemanticSimilarityExampleSelector doesn't behave as expected. It is supposed to select examples based on which examples are most similar to the inputs.
The same issue is found in the [official documentation](https://langchain.readthedocs.io/en/latest/modules/prompts/examples/example_selectors.html#similarity-exampleselector)
Here is the code and outputs
<img width="644" alt="Capture 1" src="https://user-images.githubusercontent.com/52585642/224474277-9155fa44-95bc-4f37-a87e-83c4867966d3.PNG">
<img width="646" alt="Capture 2" src="https://user-images.githubusercontent.com/52585642/224474281-d37edfed-2922-412c-be21-e353c64b6e21.PNG">
Input 'fat' is a measurement, so it should select the tall/short example, but we still get the happy/sad example.
<img width="653" alt="Capture 3" src="https://user-images.githubusercontent.com/52585642/224474286-7793c4e2-5e5e-4ba1-9f97-a3a74bc278ef.PNG">
When we add another 'feeling' example, it is not selected in the output even though we have a 'feeling' input.
<img width="652" alt="Capture 4" src="https://user-images.githubusercontent.com/52585642/224474295-265690a7-5f7f-4ef2-bceb-72270190311a.PNG">
| bug with SemanticSimilarityExampleSelector | https://api.github.com/repos/langchain-ai/langchain/issues/1596/comments | 2 | 2023-03-11T08:52:10Z | 2023-09-18T16:23:35Z | https://github.com/langchain-ai/langchain/issues/1596 | 1,619,936,437 | 1,596 |
[
"hwchase17",
"langchain"
]
| I've been trying things out from the [docs](https://langchain.readthedocs.io/en/latest/modules/memory/getting_started.html) and encountered this error when trying to import the following:
`from langchain.memory import ChatMessageHistory`
ImportError: cannot import name 'AIMessage' from 'langchain.schema'
`from langchain.memory import ConversationBufferMemory`
ImportError: cannot import name 'AIMessage' from 'langchain.schema' | ImportError: cannot import name 'AIMessage' from 'langchain.schema' | https://api.github.com/repos/langchain-ai/langchain/issues/1595/comments | 3 | 2023-03-11T05:09:26Z | 2023-03-14T15:57:49Z | https://github.com/langchain-ai/langchain/issues/1595 | 1,619,886,185 | 1,595 |
[
"hwchase17",
"langchain"
]
| null | Allow encoding such as "encoding='utf8' " to be passed into TextLoader if the file is not the default system encoding. | https://api.github.com/repos/langchain-ai/langchain/issues/1593/comments | 9 | 2023-03-11T00:25:28Z | 2024-05-13T16:07:22Z | https://github.com/langchain-ai/langchain/issues/1593 | 1,619,793,517 | 1,593 |
[
"hwchase17",
"langchain"
]
| `add_texts` function takes `texts` which are meant to be added to the OpenSearch vector domain.
These texts are then passed to compute the embeddings using the following code:
```
embeddings = [
self.embedding_function.embed_documents(list(text))[0] for text in texts
]
```
which doesn't create the expected embeddings with OpenAIEmbeddings(), because `list(text)` turns each string into a list of single characters, so only the embedding of the first character is kept.
whereas,
`embeddings = self.embedding_function.embed_documents(texts)`
works as expected. | add_texts function in OpenSearchVectorSearch class doesn't create embeddings as expected | https://api.github.com/repos/langchain-ai/langchain/issues/1592/comments | 2 | 2023-03-11T00:18:18Z | 2023-09-18T16:23:41Z | https://github.com/langchain-ai/langchain/issues/1592 | 1,619,789,882 | 1,592 |
[
"hwchase17",
"langchain"
]
| Looks like a similar issue to the one OpenAI had when the model was introduced (arguments like best_of and logprobs weren't supported). | AzureOpenAI doesn't work with GPT 3.5 Turbo deployed models | https://api.github.com/repos/langchain-ai/langchain/issues/1591/comments | 18 | 2023-03-11T00:13:29Z | 2023-10-25T16:10:02Z | https://github.com/langchain-ai/langchain/issues/1591 | 1,619,787,442 | 1,591 |
[
"hwchase17",
"langchain"
]
| The example in the [documentation](https://langchain.readthedocs.io/en/latest/modules/agents/agent_toolkits/json.html) doesn't state how to use these tools. I have a JSON [file](https://gist.github.com/Smyja/aaaeb3ef6f2af68c27f0e1ea42bfb52d) that is basically a list of dictionaries; how can I use the tools to access the text keys (or all keys) to find an answer to a question? @agola11 | How to use JsonListKeysTool and JsonGetValueTool for json agent | https://api.github.com/repos/langchain-ai/langchain/issues/1589/comments | 1 | 2023-03-10T21:06:13Z | 2023-09-10T16:42:51Z | https://github.com/langchain-ai/langchain/issues/1589 | 1,619,623,956 | 1,589
[
"hwchase17",
"langchain"
]
| version: 0.0.106
OpenAI seems to no longer support max_retries.
https://platform.openai.com/docs/api-reference/completions/create?lang=python
| ERROR:root:'OpenAIEmbeddings' object has no attribute 'max_retries' | https://api.github.com/repos/langchain-ai/langchain/issues/1585/comments | 10 | 2023-03-10T20:12:07Z | 2023-04-14T05:13:51Z | https://github.com/langchain-ai/langchain/issues/1585 | 1,619,559,028 | 1,585 |
[
"hwchase17",
"langchain"
]
| I'm wondering if folks have thought about easy ways to upload a file system as a prompt. I know file systems can exceed the character limit, but it would be very useful for applications like GitHub Copilot, where I'd like to upload my entire codebase (and even installed dependencies) as background before prompting for specific actions like "could you please test this function".
fsspec is amazing, and I wonder if it makes sense to support a new tool for it that would make adding a local or remote file trivial. I'm happy to contribute this myself.
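To make the idea concrete, here is a rough sketch of what such a tool's core could look like (everything below is hypothetical and not an existing langchain API; it uses the naive `# path/to/file` encoding discussed next):

```python
import fsspec


def filesystem_to_prompt(root: str, protocol: str = "file", max_chars: int = 8000) -> str:
    """Walk a local or remote filesystem with fsspec and concatenate files into one prompt."""
    fs = fsspec.filesystem(protocol)
    parts, total = [], 0
    for path in sorted(fs.find(root)):
        with fs.open(path, "r") as f:
            text = f.read()
        block = f"# {path}\n{text}\n"
        if total + len(block) > max_chars:  # crude guard against blowing the context window
            break
        parts.append(block)
        total += len(block)
    return "".join(parts)


# e.g. filesystem_to_prompt("./my_project") locally, or protocol="s3"/"github" for remote trees
```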
One area where I could use some guidance is how to represent folders to popular LLMs, because naive encodings like the one below have given me really mixed results on ChatGPT:
```
# path/to/file1
code
code
code
# path/to/file2
code
code
code
``` | Upload filesystem | https://api.github.com/repos/langchain-ai/langchain/issues/1584/comments | 0 | 2023-03-10T19:44:55Z | 2023-03-10T20:03:33Z | https://github.com/langchain-ai/langchain/issues/1584 | 1,619,529,926 | 1,584 |
[
"hwchase17",
"langchain"
]
| It would be nice if we could at least get the final SQL and results from the query like we do for SQLDatabaseChain.
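For reference, a minimal sketch of how SQLDatabaseChain exposes this today (the connection string and question below are placeholders):

```python
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain

db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder database
chain = SQLDatabaseChain(
    llm=OpenAI(temperature=0),
    database=db,
    return_intermediate_steps=True,  # the behaviour requested here for the sequential chain
)
result = chain("How many users are there?")
print(result["intermediate_steps"])  # includes the generated SQL and its result
```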
I'll try to put together a pull request. | Allow SQLDatabaseSequentialChain to return some Intermedite steps | https://api.github.com/repos/langchain-ai/langchain/issues/1582/comments | 1 | 2023-03-10T17:11:32Z | 2023-08-24T16:14:57Z | https://github.com/langchain-ai/langchain/issues/1582 | 1,619,320,010 | 1,582 |
[
"hwchase17",
"langchain"
]
| There are [rumors](https://twitter.com/transitive_bs/status/1628118163874516992) that GPT-4 (code name DV?) will feature a whopping 32k max context length. It's time to build abstractions for new token lengths.
 | Token Lengths | https://api.github.com/repos/langchain-ai/langchain/issues/1580/comments | 2 | 2023-03-10T15:10:02Z | 2023-04-07T21:17:39Z | https://github.com/langchain-ai/langchain/issues/1580 | 1,619,135,778 | 1,580 |
[
"hwchase17",
"langchain"
]
| SQLAlchemy v2 is out and has important new features.
I suggest relaxing the dependency requirement from `SQLAlchemy = "^1"` to `SQLAlchemy = ">=1.0, <3.0"`
| Missing support for SQLAlchemy v2 | https://api.github.com/repos/langchain-ai/langchain/issues/1578/comments | 2 | 2023-03-10T09:12:13Z | 2023-03-17T04:55:37Z | https://github.com/langchain-ai/langchain/issues/1578 | 1,618,619,296 | 1,578 |
[
"hwchase17",
"langchain"
]
| It's currently not possible to pass a custom deployment name, because the model/deployment name is hard-coded as "text-embedding-ada-002" in variables within the class definition.
In Azure OpenAI, deployment names can be customized, and that doesn't work with the OpenAIEmbeddings class.
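Roughly what is being asked for (a sketch only; the `deployment` and `model` arguments below are hypothetical and do not exist on OpenAIEmbeddings at the time of writing):

```python
from langchain.embeddings import OpenAIEmbeddings

# Hypothetical interface: let the caller name the Azure deployment explicitly.
embeddings = OpenAIEmbeddings(
    deployment="my-custom-ada-002",   # Azure deployment name, freely chosen
    model="text-embedding-ada-002",   # underlying model behind that deployment
)
```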
There is proper Azure support for LLM OpenAI, but it is missing for Embeddings. | Missing Azure OpenAI support for "OpenAIEmbeddings" | https://api.github.com/repos/langchain-ai/langchain/issues/1577/comments | 8 | 2023-03-10T09:06:05Z | 2023-09-27T16:13:11Z | https://github.com/langchain-ai/langchain/issues/1577 | 1,618,610,444 | 1,577 |