issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number
---|---|---|---|---|---|---|---|---|---
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently, when we initialize an agent we can only pass one LLM, which is responsible for both tool selection and chatting. There should be a feature that allows us to define separate LLMs for tool selection and chatting, respectively.
### Motivation
It has been observed that text-completion LLMs are better at decision making, which is helpful when we have multiple tools, but their chat responses are not that impressive. On the other hand, chat models are better at chatting but not as good at decision making, which causes problems when we have multiple tools. Therefore, I want to harness the best of both in a single agent.
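For context, one rough workaround with the current API (a hedged sketch, not the requested feature; the tool and chain names here are illustrative) is to drive the agent's reasoning with one LLM and expose the chat model through a tool:
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Text-completion model drives tool selection (the agent's reasoning loop).
tool_llm = OpenAI(temperature=0)

# Chat model handles the user-facing reply, wrapped in a chain and exposed as a tool.
chat_chain = LLMChain(
    llm=ChatOpenAI(temperature=0.7),
    prompt=PromptTemplate.from_template("Reply conversationally to: {input}"),
)

tools = [
    Tool(
        name="chat_reply",
        func=chat_chain.run,
        description="Use this to phrase the final conversational answer for the user.",
    ),
    # ... the rest of the tools ...
]
agent = initialize_agent(tools, tool_llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```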
### Your contribution
Yes I can submit a PR if certain guidance is provided. | Separate LLMs for Tool Selection and Chatting in Agent Initialization | https://api.github.com/repos/langchain-ai/langchain/issues/8855/comments | 1 | 2023-08-07T07:38:07Z | 2023-11-13T16:06:11Z | https://github.com/langchain-ai/langchain/issues/8855 | 1,838,870,679 | 8,855 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello! I am trying to use LangSmith to test my LLM. I've created a dataset, but I'm confused, as this dataset doesn't seem to work even though I am using `StuffDocumentsChain` in testing (I think) the same way I'm using it in production. Here's how I call it in production:
```
docs_chain = StuffDocumentsChain(
llm_chain=LLMChain(
llm=ChatOpenAI(
temperature=0,
),
prompt=prompt,
verbose=True,
),
document_variable_name="context",
verbose=True,
document_prompt=document_prompt,
)
...
answer = await docs_chain.arun(
input_documents=llm_context["docs"], **llm_context["inputs"]
)
...
```
So, this accepts 4 inputs (I believe coming from `input_documents` and `llm_context["inputs"]`): `'question', 'chat_history', 'input_documents', 'organization_uuid'`
However, when I want to test this with LangSmith I'm trying to do:
```
def create_chain():
messages = [SystemMessagePromptTemplate.from_template(system_prompt)]
messages.append(HumanMessagePromptTemplate.from_template('{question}'))
retrieval_prompt = ChatPromptTemplate.from_messages(messages)
combine_docs_chain = StuffDocumentsChain(
llm_chain=LLMChain(
llm=ChatOpenAI(
temperature=0,
),
prompt=retrieval_prompt,
verbose=True,
),
document_variable_name="context",
verbose=True,
document_prompt=document_prompt,
)
return combine_docs_chain
...
await arun_on_dataset(
client=client,
dataset_name=dataset_name,
llm_or_chain_factory=create_chain,
evaluation=eval_config,
verbose=True,
)
```
but I'm getting:
```
langchain.smith.evaluation.runner_utils.InputFormatError: Example inputs do not match chain input keys. Please provide an input_mapper to convert the example.inputs to a compatible format for the chain you wish to evaluate.Expected: ['input_documents']. Got: dict_keys(['question', 'chat_history', 'input_documents', 'organization_uuid'])
```
I don't understand this, as I'm calling this in the same way... shouldn't those inputs map in the same way?
As the error states, I can also pass an `input_mapper` to just convert my example inputs into the required format, but then I would be throwing away the `question`, etc., and I want that `question` to be part of the test.
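For illustration only (a sketch, assuming the test chain's prompt needs just `question` besides the documents, and that `arun_on_dataset` accepts an `input_mapper` as the error message suggests), a mapper could forward only the keys the chain knows about and drop the rest:
```python
def input_mapper(example_inputs: dict) -> dict:
    # Keep only what StuffDocumentsChain expects; drop extra dataset columns
    # such as chat_history and organization_uuid that the test prompt never uses.
    return {
        "input_documents": example_inputs["input_documents"],
        "question": example_inputs["question"],
    }

await arun_on_dataset(
    client=client,
    dataset_name=dataset_name,
    llm_or_chain_factory=create_chain,
    evaluation=eval_config,
    input_mapper=input_mapper,
    verbose=True,
)
```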
I'm probably thinking about this wrong. Let me know what you would do here, thank you!
### Suggestion:
_No response_ | Issue: Having trouble passing question with LangSmith | https://api.github.com/repos/langchain-ai/langchain/issues/8854/comments | 1 | 2023-08-07T07:08:48Z | 2023-08-08T03:07:45Z | https://github.com/langchain-ai/langchain/issues/8854 | 1,838,826,396 | 8,854 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi there, I have recently been using LangChain to build my toy chatbot. I used both **Qdrant** cloud storage and local storage for testing. When I use local storage for the **vector_store** and **search_kwargs**, everything works fine. But when I switch to Qdrant cloud storage, the retriever of the vector DB reports unknown **search_kwargs**. My partial code is shown below.
<img width="947" alt="image" src="https://github.com/langchain-ai/langchain/assets/59675745/50d0e930-38a8-4d69-b294-656435a5fdc3">
The error:
<img width="1012" alt="image" src="https://github.com/langchain-ai/langchain/assets/59675745/f800c82c-ac75-4158-8dc2-ae32a271b7c0">
### Suggestion:
_No response_ | Issue: AssertionError: Unknown arguments: ['fetch_k', 'maximal_marginal_relevance'] when specifyig search_kwargs ini from_llm() function call | https://api.github.com/repos/langchain-ai/langchain/issues/8852/comments | 1 | 2023-08-07T06:33:40Z | 2023-08-07T16:20:46Z | https://github.com/langchain-ai/langchain/issues/8852 | 1,838,777,588 | 8,852 |
[
"hwchase17",
"langchain"
]
| ### Feature request
```
from langchain.indexes import GraphIndexCreator
from langchain.graphs.networkx_graph import KnowledgeTriple
index_creator = GraphIndexCreator(llm=llm)
graph = index_creator.from_text('text to RDF')
graph.add_triple(KnowledgeTriple('aaa', '10', 'bbb'))
graph.add_triple(KnowledgeTriple('ccc', '30', 'ddd'))
graph.add_triple(KnowledgeTriple('eee', '10', 'aaa'))
graph.add_triple(KnowledgeTriple('ffff', '5', 'ddd'))
```
Using the above code I have created a sample NetworkX graph. Now I want to visualize this graph using pyvis/matplotlib. How can I do that?
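One way to do it with matplotlib (a sketch; it reaches into the private `_graph` attribute of `NetworkxEntityGraph`, which is an implementation detail and may change between versions):
```python
import matplotlib.pyplot as plt
import networkx as nx

G = graph._graph  # the underlying networkx DiGraph
pos = nx.spring_layout(G, seed=42)
nx.draw(G, pos, with_labels=True, node_color="lightblue", node_size=1500, font_size=8)
# Edge labels hold the predicate of each triple (stored under the "relation" key).
edge_labels = nx.get_edge_attributes(G, "relation")
nx.draw_networkx_edge_labels(G, pos, edge_labels=edge_labels, font_size=7)
plt.show()
```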
### Motivation
Graph visualization support
### Your contribution
Graph visualization support | Visualize the Networkx Entity Graph created via the GraphIndexCreator class using pyvis/matplotlib | https://api.github.com/repos/langchain-ai/langchain/issues/8851/comments | 2 | 2023-08-07T06:20:54Z | 2023-11-13T16:06:16Z | https://github.com/langchain-ai/langchain/issues/8851 | 1,838,761,710 | 8,851 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
This is more like a question than feature request.
Google Vertex AI has support for some task-specific models which are specially trained for certain actions.
I know LangChain has support for LLM-based models in general, but do you ever plan to integrate these task-specific models into your library? This (by Google's definition) is a Language Service Model, but it might not be powered by Gen AI technology.
<img width="932" alt="image" src="https://github.com/langchain-ai/langchain/assets/97156231/867a3101-e429-4716-a3ae-62c3c4c90813">
### Suggestion:
_No response_ | Issue: Support for Custom VertexAI models | https://api.github.com/repos/langchain-ai/langchain/issues/8850/comments | 1 | 2023-08-07T05:22:00Z | 2023-11-13T16:06:20Z | https://github.com/langchain-ai/langchain/issues/8850 | 1,838,703,116 | 8,850 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: `0.0.254`
Python version: `3.10.2`
Elasticsearch version: `7.17.0`
System Version: macOS 13.4 (22F66)
Model Name: MacBook Pro
Model Identifier: Mac14,10
Chip: Apple M2 Pro
Total Number of Cores: 12 (8 performance and 4 efficiency)
Memory: 32 GB
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
run this using `python3 script.py`
`script.py`:
```python
import os
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch
def main():
text_path = "some-test.txt"
loader = TextLoader(text_path)
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=1000, chunk_overlap=0
) # I have also tried various chunk sizes, but still have the same error
documents = text_splitter.split_documents(data)
api_key = "..."
embeddings = OpenAIEmbeddings(openai_api_key=api_key)
os.environ["ELASTICSEARCH_URL"] = "..."
db = ElasticVectorSearch.from_documents(
documents,
embeddings,
index_name="laurad-test",
)
print(db.client.info())
db = ElasticVectorSearch(
index_name="laurad-test",
embedding=embeddings,
elasticsearch_url="..."
)
qa = RetrievalQA.from_chain_type(
llm=ChatOpenAI(temperature=0, openai_api_key=api_key),
chain_type="stuff",
retriever=db.as_retriever(),
)
query = "Hi
qa.run(query) # Error here
if __name__ == "__main__":
main()
```
Error traceback:
```
RequestError Traceback (most recent call last)
Cell In[8], line 2
1 query = "What is ARB in NVBugs?"
----> 2 qa.run(query)
File [/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py:451](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py:451), in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
449 if len(args) != 1:
450 raise ValueError("`run` supports only one positional argument.")
--> 451 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
452 _output_key
453 ]
455 if kwargs and not args:
456 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
457 _output_key
458 ]
File [/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py:258](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py:258), in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
256 except (KeyboardInterrupt, Exception) as e:
257 run_manager.on_chain_error(e)
--> 258 raise e
259 run_manager.on_chain_end(outputs)
260 final_outputs: Dict[str, Any] = self.prep_outputs(
261 inputs, outputs, return_only_outputs
262 )
...
--> 315 raise HTTP_EXCEPTIONS.get(status_code, TransportError)(
316 status_code, error_message, additional_info
317 )
RequestError: RequestError(400, 'search_phase_execution_exception', "class_cast_exception: class org.elasticsearch.index.fielddata.ScriptDocValues$Doubles cannot be cast to class org.elasticsearch.xpack.vectors.query.VectorScriptDocValues$DenseVectorScriptDocValues (org.elasticsearch.index.fielddata.ScriptDocValues$Doubles is in unnamed module of loader 'app'; org.elasticsearch.xpack.vectors.query.VectorScriptDocValues$DenseVectorScriptDocValues is in unnamed module of loader java.net.FactoryURLClassLoader @7808fb9)")
```
### Expected behavior
I expect to ask questions and have answers provided back using the `langchain.chains.retrieval_qa.base.RetrievalQA` class. However, I am getting an error 400 when trying to query the LLM.
**Note**: I do not get the same error when using ChromaDB or OpenSearch as the retriever. | RequestError: RequestError(400, 'search_phase_execution_exception...) with ElasticSearch as Vector store when querying | https://api.github.com/repos/langchain-ai/langchain/issues/8849/comments | 11 | 2023-08-07T04:58:11Z | 2024-03-13T19:57:19Z | https://github.com/langchain-ai/langchain/issues/8849 | 1,838,682,846 | 8,849 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Currently,
I'm using OpenAI GPT-3.5 for models and Chroma DB to save vectors.
I have some PDF documents totalling 2,000 pages. The page count will grow by about 100 pages every day,
and in the end, the total number of pages that must be ingested is around 20,000.
I have 2 questions:
1. I worry about the token limitations of GPT-3.5. Any idea / suggestion for working around token limits with many source documents?
2. For Chroma DB, does Chroma DB have limitations on saving vectors?
I am already using a text splitter:
`textSplitter = RecursiveCharacterTextSplitter(chunk_size=1536, chunk_overlap=200,separators=["#####","\n\n","\n","====="])`
I accept all ideas and suggestions for this, thanks for the advice.
Please take note: I can't make a summary of every document.
### Suggestion:
_No response_ | Issue: ChromaDB document and token openAI limitations | https://api.github.com/repos/langchain-ai/langchain/issues/8846/comments | 2 | 2023-08-07T03:48:48Z | 2023-11-13T16:06:25Z | https://github.com/langchain-ai/langchain/issues/8846 | 1,838,622,804 | 8,846 |
[
"hwchase17",
"langchain"
]
| langchain==0.0.20
from langchain.schema import messages_to_dict
----> 1 from langchain.schema import messages_to_dict
ModuleNotFoundError: No module named 'langchain.schema' | ModuleNotFoundError: No module named 'langchain.schema' | https://api.github.com/repos/langchain-ai/langchain/issues/8843/comments | 4 | 2023-08-07T02:25:35Z | 2023-11-13T16:06:30Z | https://github.com/langchain-ai/langchain/issues/8843 | 1,838,563,345 | 8,843 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.253
Python:3.11
### Who can help?
@agola11 @hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Environment Setup:
Ensure you're using Python 3.11.
Install the necessary libraries and dependencies:
```bash
pip install fastapi uvicorn aiohttp langchain
```
2. APIChain Initialization:
Set up the APIChain utility using the provided API documentation and the chosen language model:
```python
from langchain import APIChain
chain = APIChain.from_llm_and_api_docs(api_docs=openapi.MY_API_DOCS, llm=choosen_llm, verbose=True, headers=headers)
```
3. Run the FastAPI application:
Use a tool like Uvicorn to start your FastAPI app:
```bash
uvicorn your_app_name:app --reload
```
4. Trigger the API Endpoint:
Make a request to the FastAPI endpoint that uses the APIChain utility. This could be through tools like curl, Postman, or directly from a browser, depending on how your API is set up.
Execute the Callback:
Inside the relevant endpoint, ensure you have the following snippet:
```python
with get_openai_callback() as cb:
response = await chain.arun(user_query)
```
5. Observe the Error:
You should encounter a TypeError indicating a conflict with the auth argument in the aiohttp.client.ClientSession.request() method, because a header was provided to APIChain and it was run with the `arun` method.
### Expected behavior
Request Execution:
The chain.arun(user_query) method should interact with the intended external service or API without any issues.
The auth parameter, when used in the underlying request to the external service (in aiohttp), should be correctly applied without conflicts or multiple definitions. | TypeError Due to Duplicate 'auth' Argument in aiohttp Request when provide header to APIChain | https://api.github.com/repos/langchain-ai/langchain/issues/8842/comments | 2 | 2023-08-06T23:55:31Z | 2023-09-25T09:40:35Z | https://github.com/langchain-ai/langchain/issues/8842 | 1,838,467,546 | 8,842 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: `0.0.240`
Python version: `3.10.2`
Elasticsearch version: `7.17.0`
System Version: macOS 13.4 (22F66)
Model Name: MacBook Pro
Model Identifier: Mac14,10
Chip: Apple M2 Pro
Total Number of Cores: 12 (8 performance and 4 efficiency)
Memory: 32 GB
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
run this using `python3 script.py`
`script.py`:
```python
import os
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch
def main():
text_path = "some-test.txt"
loader = TextLoader(text_path)
data = loader.load()
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
chunk_size=1000, chunk_overlap=0
) # I have also tried various chunk sizes, but still have the same error
documents = text_splitter.split_documents(data)
api_key = "..."
embeddings = OpenAIEmbeddings(openai_api_key=api_key)
os.environ["ELASTICSEARCH_URL"] = "..."
db = ElasticVectorSearch.from_documents(
documents,
embeddings,
index_name="laurad-test",
)
print(db.client.info())
db = ElasticVectorSearch(
index_name="laurad-test",
embedding=embeddings,
elasticsearch_url="..."
)
qa = RetrievalQA.from_chain_type(
llm=ChatOpenAI(temperature=0, openai_api_key=api_key),
chain_type="stuff",
retriever=db.as_retriever(),
)
if __name__ == "__main__":
main()
```
Error traceback:
```
Traceback (most recent call last):
File "/Users/laurad/dev/LLM/public_test.py", line 46, in <module>
main()
File "/Users/laurad/dev/LLM/public_test.py", line 41, in main
retriever=db.as_retriever(),
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 458, in as_retriever
tags.extend(self.__get_retriever_tags())
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 452, in __get_retriever_tags
if self.embeddings:
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/elastic_vector_search.py", line 158, in embeddings
return self.embeddings
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/elastic_vector_search.py", line 158, in embeddings
return self.embeddings
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/vectorstores/elastic_vector_search.py", line 158, in embeddings
return self.embeddings
[Previous line repeated 993 more times]
RecursionError: maximum recursion depth exceeded
```
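The traceback suggests the `embeddings` property in `elastic_vector_search.py` returns itself instead of the stored embedding object; a minimal illustration of that pattern (my reading of the stack trace, not a confirmed excerpt of the source):
```python
class Example:
    def __init__(self, embedding):
        self.embedding = embedding

    @property
    def embeddings(self):
        return self.embeddings  # recurses forever; presumably meant `self.embedding`
```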
### Expected behavior
I expect to ask questions and have answers provided back using the `langchain.chains.retrieval_qa.base.RetrievalQA` class. However, I am getting a `RecursionError` when creating the retrieval chain.
**Note**: I do not get the same error when using ChromaDB or OpenSearch as the retriever.
| RecursionError: maximum recursion depth exceeded with ElasticVectorSearch during RetrievalQA | https://api.github.com/repos/langchain-ai/langchain/issues/8836/comments | 2 | 2023-08-06T19:28:17Z | 2023-08-06T21:40:49Z | https://github.com/langchain-ai/langchain/issues/8836 | 1,838,335,052 | 8,836 |
[
"hwchase17",
"langchain"
]
I initialized the tool and agent as follows:
```
llm = ChatOpenAI(
openai_api_key=OPENAI_API_KEY,
model_name = 'gpt-3.5-turbo',
temperature=0.0,
)
conversational_memory = ConversationBufferWindowMemory(
memory_key='chat_history',
k=6,
return_messages=True
)
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type='stuff',
retriever=vectorstore.as_retriever(),
)
tools = [
Tool(
name='Fressnapf Knowledge Base',
func=qa.run,
description=("""
Only use this tool when you need to know more about Fressnapf service and products
""")
)
]
agent = initialize_agent(
agent='chat-conversational-react-description',
tools=tools,
llm=llm,
verbose=True,
max_iterations=3,
early_stopping_method='generate',
memory=conversational_memory,
)
```
Now, when I ask the agent some basic questions, it always uses the tool even if it isn't useful for answering the question.
After running `agent('Do you have ideas for a birthday gift?')` the agent directly uses the tool.
```
> Entering new AgentExecutor chain...
{
"action": "Fressnapf Knowledge Base",
"action_input": "birthday gift ideas"
}
Observation: Here are some birthday gift ideas:
1. Personalized photo album or picture frame
2. Spa or wellness gift set
3. Subscription to a favorite magazine or streaming service
4. Cooking or baking class
5. Outdoor adventure experience, such as a hiking or kayaking trip
6. Customized jewelry or accessories
7. Book or a set of books by their favorite author
8. Tickets to a concert, play, or sports event
9. A gift card to their favorite restaurant or store
10. A thoughtful handwritten letter or card expressing your love and appreciation.
Thought:{
"action": "Final Answer",
"action_input": "Here are some birthday gift ideas:\n\n1. Personalized photo album or picture frame\n2. Spa or wellness gift set\n3. Subscription to a favorite magazine or streaming service\n4. Cooking or baking class\n5. Outdoor adventure experience, such as a hiking or kayaking trip\n6. Customized jewelry or accessories\n7. Book or a set of books by their favorite author\n8. Tickets to a concert, play, or sports event\n9. A gift card to their favorite restaurant or store\n10. A thoughtful handwritten letter or card expressing your love and appreciation."
}
```
I've provided a very clear description of the tool and of when the agent should use it, but it seems as if the agent is ignoring my instructions.
Does anyone have any idea how to solve this issue? | Agent using tool when it's not needed | https://api.github.com/repos/langchain-ai/langchain/issues/8827/comments | 11 | 2023-08-06T16:17:01Z | 2024-05-13T11:03:47Z | https://github.com/langchain-ai/langchain/issues/8827 | 1,838,264,689 | 8,827 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I want to offer to change refine prompts to make a summary more correct.
### Motivation
In some cases (I can't say exactly when, but roughly one time per 10-20 runs) I saw that the refine LLM added some additional text to the summary.
Example:
> Since the provided context does not add any relevant information to the original summary, we will return the original summary:
> ... summary here...
In other cases I even saw the output of the refine prompt as:
>The existing summary is more comprehensive and provides a clear understanding of <my topic here>. The additional context provided does not contribute to the summary and can be disregarded.
As a result I have this text as the summary, instead of my summary itself.
### Your contribution
I want to propose to use other prompts and check JSON result.
```
refine_initial_prompt_template = """\
Write a concise summary of the text (delimited with XML tags).
Please provide result in JSON format:
{{
"summary": "summary here"
}}
<text>
{text}
</text>
"""
refine_combine_prompt_template = """\
Your job is to produce a final summary. We have provided an existing summary up to a certain point (delimited with XML tags).
We have the opportunity to refine the existing summary (only if needed) with some more context (delimited with XML tags).
Given the new context, refine the original summary (only if new context is useful) otherwise say that it's not useful.
Please provide result in JSON format:
{{
"not_useful": "True if new context was not useful, False if new content was used",
"refined_summary": "refined summary here if new context was useful"
}}
<existing_summary>
{existing_summary}
</existing_summary>
<more_context>
{more_context}
</more_context>
"""
```
Now I have JSON with the field not_useful=True and can easily ignore it. | Refine prompt modification | https://api.github.com/repos/langchain-ai/langchain/issues/8824/comments | 5 | 2023-08-06T15:25:06Z | 2024-05-13T16:07:49Z | https://github.com/langchain-ai/langchain/issues/8824 | 1,838,248,487 | 8,824 |
[
"hwchase17",
"langchain"
]
I am trying to use the Python REPL tool in LangChain with a CSV file so it can answer questions based on the CSV file content. The problem is that it gets the action_input step of writing the code to execute right. However, it fails to answer because it can't determine the dataframe it should run the code on.
For example, I ask it about the longest name in a dataframe containing a column named "name" and it returns the following:
Entering new AgentExecutor chain...
{
"action": "python_repl_ast",
"action_input": "import pandas as pd\n\n# Assuming the dataset is stored in a pandas DataFrame called 'data'\nnames = data['NAMES']\nlongest_name = max(names, key=len)\nlongest_name"
}
Observation: NameError: name 'data' is not defined
Thought:{
"action": "Final Answer",
**"action_input": "I apologize for the confusion. Unfortunately, I do not have access to the dataset required to find the longest name. Is there anything else I can assist you with?"**
}
Is there a way to pass the dataframe object to the Pandas REPL tool in order for the code to execute properly and return me the answer? This problem is encountered while using the GPT-3.5-turbo API model.
This is the full code:
```
df = pd.read_csv(file_path)
tools = [PythonAstREPLTool(locals={"df": df})]
agent = initialize_agent(
agent='chat-conversational-react-description',
tools=tools,
llm=llm,
verbose=True,
max_iterations=3,
early_stopping_method='generate',
memory=conversational_memory
)
query = 'What is the longest name?'
print(agent(query))
```
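One possible alternative (a sketch, not a confirmed fix for the conversational agent above) is the dedicated pandas agent, which injects the dataframe under the name `df` and tells the model about it, so the generated code references the right variable:
```python
from langchain.agents import create_pandas_dataframe_agent

# Reuses the `llm` and `df` defined above.
agent = create_pandas_dataframe_agent(llm, df, verbose=True)
print(agent.run("What is the longest name?"))
```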
| How to pass a CSV file or a dataframe to Pandas REPL tool in Langchain? | https://api.github.com/repos/langchain-ai/langchain/issues/8821/comments | 3 | 2023-08-06T12:26:01Z | 2023-11-14T16:06:09Z | https://github.com/langchain-ai/langchain/issues/8821 | 1,838,189,640 | 8,821 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Guys, if not already logged: I had to uninstall the unstructured module and pin it back to the lower version 0.7.12 (`python3 -m pip install unstructured==0.7.12`).
### Suggestion:
maybe this needs attention ? | Issue: Unstructured module nees backing off to lower ver. 0.7.12 seems ok | https://api.github.com/repos/langchain-ai/langchain/issues/8819/comments | 1 | 2023-08-06T11:45:44Z | 2023-11-12T16:05:44Z | https://github.com/langchain-ai/langchain/issues/8819 | 1,838,174,943 | 8,819 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
How do I handle scenarios where the currently asked question is a totally new question and has no relation to the previous chat history? The standalone question would probably be nonsense, as its meaning gets twisted by the chat history.
```
OPENAI_API_KEY=xxxxxx
OPENAI_API_BASE=https://xxxxxxxx.openai.azure.com/
OPENAI_API_VERSION=2023-05-15
import os
import openai
from dotenv import load_dotenv
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
# Load environment variables (set OPENAI_API_KEY, OPENAI_API_BASE, and OPENAI_API_VERSION in .env)
load_dotenv()
# Configure OpenAI API
openai.api_type = "azure"
openai.api_base = os.getenv('OPENAI_API_BASE')
openai.api_key = os.getenv("OPENAI_API_KEY")
openai.api_version = os.getenv('OPENAI_API_VERSION')
# Initialize gpt-35-turbo and our embedding model
llm = AzureChatOpenAI(deployment_name="gpt-35-turbo")
embeddings = OpenAIEmbeddings(deployment_id="text-embedding-ada-002", chunk_size=1)
from langchain.document_loaders import DirectoryLoader
from langchain.document_loaders import TextLoader
from langchain.text_splitter import TokenTextSplitter
loader = DirectoryLoader('data/qna/', glob="*.txt", loader_cls=TextLoader, loader_kwargs={'autodetect_encoding': True})
documents = loader.load()
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
from langchain.vectorstores import FAISS
db = FAISS.from_documents(documents=docs, embedding=embeddings)
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate
# Adapt if needed
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template("""Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:""")
qa = ConversationalRetrievalChain.from_llm(llm=llm,
retriever=db.as_retriever(),
condense_question_prompt=CONDENSE_QUESTION_PROMPT,
return_source_documents=True,
verbose=False)
```
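One common mitigation (a sketch, not an official fix) is to tell the condense prompt to pass unrelated questions through unchanged:
```python
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(
    """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
If the follow up question is not related to the chat history, return the follow up question unchanged.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:"""
)
```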
### Suggestion:
_No response_ | Issue: Conversation retreival chain : if current question is not at all related to prevoius chat history | https://api.github.com/repos/langchain-ai/langchain/issues/8818/comments | 8 | 2023-08-06T11:42:41Z | 2024-02-19T16:09:06Z | https://github.com/langchain-ai/langchain/issues/8818 | 1,838,174,041 | 8,818 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I can't find a way to deal with the 'maximum context length' error from OpenAI. Is there any way to catch it within LangChain?
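One option (a sketch, assuming the pre-1.0 `openai` package and a chain object named `chain` — both assumptions) is to catch the underlying OpenAI exception around the call:
```python
import openai

try:
    answer = chain.run(query)
except openai.error.InvalidRequestError as e:
    if "maximum context length" in str(e):
        # e.g. trim the prompt, drop the oldest history, or re-chunk and retry
        answer = None
    else:
        raise
```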
### Suggestion:
Add a error handler? | How to catch 'maximum context length' error | https://api.github.com/repos/langchain-ai/langchain/issues/8817/comments | 2 | 2023-08-06T08:08:29Z | 2023-11-12T16:05:50Z | https://github.com/langchain-ai/langchain/issues/8817 | 1,838,102,702 | 8,817 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I've updated the code, but strangely it doesn't find a good response. When I print(response["answer"]) I get that there is no text to give for the query I submitted, even though it gets information from the internet and the Documents in the list seem well structured. Here is the code:
```python
from googlesearch import search
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.document_loaders import (
UnstructuredWordDocumentLoader,
TextLoader,
UnstructuredPowerPointLoader,
)
from langchain.tools import Tool
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.chat_models import ChatOpenAI
from langchain.docstore.document import Document
import os
import openai
import sys
from dotenv import load_dotenv, find_dotenv
sys.path.append('../..')
_ = load_dotenv(find_dotenv())
google_api_key = os.environ.get("GOOGLE_API_KEY")
google_cse_id = os.environ.get("GOOGLE_CSE_ID")
openai.api_key = os.environ['OPENAI_API_KEY']
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus"
os.environ["LANGCHAIN_API_KEY"] = os.environ['LANGCHAIN_API_KEY']
os.environ["GOOGLE_API_KEY"] = google_api_key
os.environ["GOOGLE_CSE_ID"] = google_cse_id
folder_path_docx = "DB\\DB VARIADO\\DOCS"
folder_path_txt = "DB\\BLOG-POSTS"
folder_path_pptx_1 = "DB\\PPT JUNIO"
folder_path_pptx_2 = "DB\\DB VARIADO\\PPTX"
loaded_content = []
for file in os.listdir(folder_path_docx):
if file.endswith(".docx"):
file_path = os.path.join(folder_path_docx, file)
loader = UnstructuredWordDocumentLoader(file_path)
docx = loader.load()
loaded_content.extend(docx)
for file in os.listdir(folder_path_txt):
if file.endswith(".txt"):
file_path = os.path.join(folder_path_txt, file)
loader = TextLoader(file_path, encoding='utf-8')
text = loader.load()
loaded_content.extend(text)
for file in os.listdir(folder_path_pptx_1):
if file.endswith(".pptx"):
file_path = os.path.join(folder_path_pptx_1, file)
loader = UnstructuredPowerPointLoader(file_path)
slides_1 = loader.load()
loaded_content.extend(slides_1)
for file in os.listdir(folder_path_pptx_2):
if file.endswith(".pptx"):
file_path = os.path.join(folder_path_pptx_2, file)
loader = UnstructuredPowerPointLoader(file_path)
slides_2 = loader.load()
loaded_content.extend(slides_2)
embedding = OpenAIEmbeddings()
embeddings_content = []
for one_loaded_content in loaded_content:
embedding_content = embedding.embed_query(one_loaded_content.page_content)
embeddings_content.append(embedding_content)
db = DocArrayInMemorySearch.from_documents(loaded_content, embedding)
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 3})
search = GoogleSearchAPIWrapper()
def custom_search(query):
max_results = 3
internet_results = search.results(query, max_results)
internet_documents = [Document(page_content=result["snippet"], metadata={
"source": result["link"]}) for result in internet_results
]
return internet_documents
chain = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(model_name="gpt-4", temperature=0),
chain_type="map_reduce",
retriever=retriever,
return_source_documents=True,
return_generated_question=True,
)
history = []
while True:
query = input("Hola, soy Chatbot. ¿Qué te gustaría saber? ")
internet_documents = custom_search(query)
small = loaded_content[:3]
combined_results = small + internet_documents
print(combined_results)
response = chain(
{"question": query, "chat_history": history, "documents": combined_results})
print(response["answer"])
history.append(("system", query))
history.append(("assistant", response["answer"]))
```
Can anyone help me to make it work? Appreciate!
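Note that `ConversationalRetrievalChain` only reads documents from its retriever, so the extra `documents` key above is likely ignored. A hedged sketch of running the QA step directly over the combined local + web documents instead (an assumption about the intent, not a drop-in fix):
```python
from langchain.chains.question_answering import load_qa_chain

# Reuses `combined_results` and `query` from the loop above.
qa_chain = load_qa_chain(ChatOpenAI(model_name="gpt-4", temperature=0), chain_type="map_reduce")
result = qa_chain({"input_documents": combined_results, "question": query})
print(result["output_text"])
```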
### Suggestion:
I would like that the Chatbot gives a response not just from the custom data also from what it gets from the internet. But of what I've done so far it doesn't work. | How to make a Chatbot responde based on custom data and from the internet? | https://api.github.com/repos/langchain-ai/langchain/issues/8816/comments | 2 | 2023-08-06T07:22:44Z | 2023-11-12T16:05:54Z | https://github.com/langchain-ai/langchain/issues/8816 | 1,838,088,521 | 8,816 |
[
"hwchase17",
"langchain"
]
| ### System Info
version 0.0.253
Running on a Jupyter Notebook in Google Colab
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the steps in [Access Intermediate Steps](https://github.com/langchain-ai/langchain/blob/master/docs/extras/modules/agents/how_to/intermediate_steps.ipynb) within the Agent "How To".
The final step calls for
`print(json.dumps(response["intermediate_steps"], indent=2))`
This is throwing the following error:
`TypeError: Object of type AgentAction is not JSON serializable`
Based on [this issue](https://github.com/langchain-ai/langchain/issues/2222) I think it may be happening following a recent upgrade.
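A workaround that should unblock the notebook step (a sketch): serialize the fields of each `(AgentAction, observation)` pair instead of the objects themselves:
```python
import json

steps = [
    {
        "tool": action.tool,
        "tool_input": action.tool_input,
        "log": action.log,
        "observation": observation,
    }
    for action, observation in response["intermediate_steps"]
]
print(json.dumps(steps, indent=2, default=str))  # default=str guards non-serializable observations
```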
### Expected behavior
Printing the intermediate steps as JSON. | Object of type AgentAction is not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/8815/comments | 2 | 2023-08-06T07:11:26Z | 2023-08-06T09:06:42Z | https://github.com/langchain-ai/langchain/issues/8815 | 1,838,084,628 | 8,815 |
[
"hwchase17",
"langchain"
]
| I have three metadata: vehicle, color and city.
I want to retrieve chunks with filter -> vehicle = car, color = red and city = NY
All these conditions should be met in the retrieved chunks.
How can I do that?
results = db.get_relevant_documents(
query=query,
filter={"vehicle": "car", "color": "red", "city": "NY"},
)
But I am not getting desired results.
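The exact syntax depends on the vector store. If `db` is a Chroma store, for example, multiple equality conditions usually have to be combined with an explicit `$and` and attached via `search_kwargs` (a sketch, not verified against your setup):
```python
retriever = db.as_retriever(
    search_kwargs={
        "k": 4,
        "filter": {"$and": [{"vehicle": "car"}, {"color": "red"}, {"city": "NY"}]},
    }
)
results = retriever.get_relevant_documents(query)
```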
| How to pass multiple filters in db.get_relevant_documents() | https://api.github.com/repos/langchain-ai/langchain/issues/8802/comments | 2 | 2023-08-05T20:03:46Z | 2023-11-12T16:05:59Z | https://github.com/langchain-ai/langchain/issues/8802 | 1,837,920,794 | 8,802 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Bedrock can't seem to load my credentials when used within a Lambda function. My AWS credentials were set up in my local environment using environment variables. When I use the bedrock class directly, it is able to load my credentials and my code runs smoothly. Here is the RetrievalQA function that utilizes the bedrock class:
```
def qa(query):
secrets = json.loads(get_secret())
kendra_index_id = secrets['kendra_index_id']
llm = Bedrock(model_id="amazon.titan-tg1-large", region_name='us-east-1')
llm.model_kwargs = {"maxTokenCount": 4096}
retriever = AmazonKendraRetriever(index_id=kendra_index_id)
prompt_template = """
{context}
{question} If you are unable to find the relevant article, respond 'I can't generate the needed content based on the context provided.'
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"])
chain = RetrievalQA.from_chain_type(
llm=llm,
retriever=retriever,
verbose=True,
chain_type_kwargs={
"prompt": PROMPT
}
)
return chain(query)
```
The above code runs without issues when used directly. However, when used within a Lambda function, it fails. The script used to build my Lambda function uses the Bedrock class as written above; however, I run into the validation error below when I invoke the Lambda function.
```
{
"errorMessage": "1 validation error for Bedrock\n__root__\n Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)",
"errorType": "ValidationError",
"requestId": "b772f236-f582-4308-8af5-b5a418d4327f",
"stackTrace": [
" File \"/var/task/main.py\", line 62, in handler\n response = qa(query)\n",
" File \"/var/task/main.py\", line 32, in qa\n llm = Bedrock(model_id=\"amazon.titan-tg1-large\", region_name='us-east-1',) #client=BEDROCK_CLIENT)\n",
" File \"/var/task/langchain/load/serializable.py\", line 74, in __init__\n super().__init__(**kwargs)\n",
" File \"pydantic/main.py\", line 341, in pydantic.main.BaseModel.__init__\n raise validation_error\n"
]
```
As clearly indicated by the error message, bedrock couldn't load credentials.
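One thing worth trying (a sketch, assuming the Lambda execution role has Bedrock permissions; the boto3 service name may differ across preview SDK versions): build the client explicitly and pass it in, so the wrapper does not try to resolve a local profile:
```python
import boto3

# Client is created from the Lambda execution role's credentials.
bedrock_client = boto3.client("bedrock", region_name="us-east-1")
llm = Bedrock(model_id="amazon.titan-tg1-large", client=bedrock_client)
```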
### Suggestion:
I have looked at the official documentation of the [bedrock class](https://github.com/langchain-ai/langchain/blob/b786335dd10902489f87a536ee074d747b6df370/libs/langchain/langchain/llms/bedrock.py#L51) but still do not understand why my code fails. Any help will be appreciated. @3coins @jasondotparse @hwchase17 | Issue: Amazon Bedrock can't load my credentials when called from a Lambda function | https://api.github.com/repos/langchain-ai/langchain/issues/8800/comments | 22 | 2023-08-05T18:57:43Z | 2024-02-15T16:10:51Z | https://github.com/langchain-ai/langchain/issues/8800 | 1,837,898,928 | 8,800 |
[
"hwchase17",
"langchain"
]
| ### System Info
BSHTMLLoader not working for urls
````
from langchain.document_loaders import BSHTMLLoader
url = "https://www.google.com"
loader = BSHTMLLoader({"url": url})
doc = loader.load()
````
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders import BSHTMLLoader
url = "https://www.example.com"
loader = BSHTMLLoader(url)
doc = loader.load()
```
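For reference, a hedged alternative for live URLs (BSHTMLLoader expects a local .html file path) is `WebBaseLoader`, which is also BeautifulSoup-based:
```python
from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader("https://www.example.com")
docs = loader.load()
```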
### Expected behavior
It should respond with the HTML data of the URL. I believe BSHTMLLoader won't work with a URL and only works with files (.html files). | BSHTMLLoader not working for urls | https://api.github.com/repos/langchain-ai/langchain/issues/8795/comments | 2 | 2023-08-05T14:41:28Z | 2023-08-06T11:42:45Z | https://github.com/langchain-ai/langchain/issues/8795 | 1,837,792,197 | 8,795 |
[
"hwchase17",
"langchain"
]
| ### System Info
Traceback:
docsearch.similarity_search(
File "/usr/local/lib/python3.9/site-packages/langchain/vectorstores/pinecone.py", line 162, in similarity_search
docs_and_scores = self.similarity_search_with_score(
File "/usr/local/lib/python3.9/site-packages/langchain/vectorstores/pinecone.py", line 136, in similarity_search_with_score
docs.append((Document(page_content=text, metadata=metadata), score))
File "/usr/local/lib/python3.9/site-packages/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Document"
page_content str type expected (type=type_error.str)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
langchain pinecone similarity_search_with_score
docs.append((Document(page_content=text, metadata=metadata), score))
seems to raise an error when page_content is not a string type.
We wrote the documents to Pinecone using a PDF loader,
so maybe the loader produced something unexpected.
It seems to be a bug.
### Expected behavior
No erro raised | File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for Document" | https://api.github.com/repos/langchain-ai/langchain/issues/8794/comments | 6 | 2023-08-05T13:36:08Z | 2024-06-08T16:07:15Z | https://github.com/langchain-ai/langchain/issues/8794 | 1,837,765,386 | 8,794 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Add an option to directly set the input language for OpenAIWhisperParserLocal (in case of incorrect autodetection: language="french", task="transcribe"), and optionally use the translation mode of the Whisper model (language="french", task="translate").
### Motivation
I encountered situations where the audio is split into chunks and therefore OpenAIWhisperParserLocal's language autodetection was incorrect.
### Your contribution
I'll post PR. | Add option to directly set input language for OpenAIWhisperParserLocal | https://api.github.com/repos/langchain-ai/langchain/issues/8792/comments | 1 | 2023-08-05T11:36:13Z | 2023-11-11T16:05:21Z | https://github.com/langchain-ai/langchain/issues/8792 | 1,837,713,670 | 8,792 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello,
I want achive SQL Database Agent, by using that I want to retrive data from the database and execute queries.
So for that I have tried,
```
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import AzureOpenAI
from langchain.agents.agent_types import AgentType
db = SQLDatabase.from_uri("mysql://user:pwd@localhost/db_name")
toolkit = SQLDatabaseToolkit(db=db, llm=AzureOpenAI(deployment_name="ABCD",model_name="text-davinci-003",temperature=0))
agent_executor = create_sql_agent(
llm=AzureOpenAI(deployment_name="ABCD",model_name="text-davinci-003",temperature=0),
toolkit=toolkit,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
agent_executor.run("Describe the XYZ table")
```
But with the above code the problem I am facing is "**openai.error.InvalidRequestError: Invalid URL (POST /v1/openai/deployments/trailadvisor/completions)**". The same code runs fine using OpenAI, but I get this error with AzureOpenAI.
Am I taking the correct approach, or is there another way to achieve this requirement using an AzureOpenAI deployment?
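The invalid URL usually points at missing Azure endpoint configuration; a sketch of the settings that typically have to be present (placeholders, adjust to your resource) before constructing `AzureOpenAI`:
```python
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
os.environ["OPENAI_API_KEY"] = "<your-key>"

llm = AzureOpenAI(deployment_name="ABCD", model_name="text-davinci-003", temperature=0)
```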
### Suggestion:
_No response_ | Issue: AzureOpenAI connection with SQL Database | https://api.github.com/repos/langchain-ai/langchain/issues/8790/comments | 1 | 2023-08-05T08:30:33Z | 2023-11-11T16:05:26Z | https://github.com/langchain-ai/langchain/issues/8790 | 1,837,647,295 | 8,790 |
[
"hwchase17",
"langchain"
]
| ### System Info
Name: langchain
Version: 0.0.250
Python=3.11.4
### Who can help?
@agola @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
llm = ChatOpenAI(
model="gpt-4",
temperature=0.0,
top_p=1.0,
n=20,
max_tokens=30
)
results = llm(chat_prompt.format_prompt(
inputs_description=vars["inputs_description"],
task_description=vars["task_description"],
criteria=vars["criteria"],
criteria_description=vars["criteria_description"],
eval_steps=vars["eval_steps"],
input=vars["input"],
input_type=vars["input_type"],
output=vars["output"],
output_type=vars["output_type"]
).to_messages())
```
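One detail that may explain this (an assumption, not a confirmed reading of the library): calling the chat model directly returns only the first generation even when `n > 1`; the remaining generations are surfaced through `generate()`. A sketch against the snippet above:
```python
# `messages` is the same list passed to llm(...) above, i.e.
# chat_prompt.format_prompt(...).to_messages()
result = llm.generate([messages])
for generation in result.generations[0]:  # one entry per requested generation
    print(generation.text)
```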
### Expected behavior
I was expecting to get a response with 20 results, instead of just one since I set n=20. This is the behaviour when using Openai SDK. | "n" hyperparmeter doesn't work in ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/8789/comments | 1 | 2023-08-05T08:26:14Z | 2023-08-07T13:40:06Z | https://github.com/langchain-ai/langchain/issues/8789 | 1,837,646,215 | 8,789 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi!
While using Llama cpp in LangChain, I am trying to load the "llama-2-70b-chat" model. The code I am using is:
n_gpu_layers = 40 # Change this value based on your model and your GPU VRAM pool.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(
model_path="/Users/taimoorarif/Downloads/llama-2-70b-chat.ggmlv3.q4_0.bin",
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
callback_manager=callback_manager,
verbose=True,
n_ctx=2048
)
I am getting an error which I am attaching below. The same code works perfectly fine for the 13b model. Moreover, for the 65B model, the model keeps processing the prompt and does not return anything. Any help would be appreciated. Thanks!
<img width="1018" alt="Screenshot 2023-08-05 at 3 59 05 AM" src="https://github.com/langchain-ai/langchain/assets/56273879/db0fb151-3bdb-4518-b6f7-d58dca2af261">
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
.
### Expected behavior
. | Error while loading the Llama 2 70B chat model | https://api.github.com/repos/langchain-ai/langchain/issues/8788/comments | 2 | 2023-08-05T07:59:48Z | 2023-11-15T16:07:07Z | https://github.com/langchain-ai/langchain/issues/8788 | 1,837,636,776 | 8,788 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: 0.0.252
python: 3.10.12
@agola11
### Who can help?
@agola11 please take a look,
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a callback handler LogHandler for on_chain_start, on_chain_start, on_chat_model_start and log run_id, parent_run_id in each of them
2. Create a retrival chain and add this LogHandler
3. Add this LogHandler to llm as well
4. When running the chain, one of the nested chains is not logged in between, because callbacks are not passed to that chain
### Expected behavior
All the nested chains should have callbacks defined.
| RetrievalQA.from_chain_type: callbacks are not called for all nested chains | https://api.github.com/repos/langchain-ai/langchain/issues/8786/comments | 1 | 2023-08-05T06:43:10Z | 2023-11-11T16:05:36Z | https://github.com/langchain-ai/langchain/issues/8786 | 1,837,609,787 | 8,786 |
[
"hwchase17",
"langchain"
]
| ### Feature request
If possible, could alternative tokenizer options to NLTK be provided, such as spaCy, OpenNLP, ...?
### Motivation
NLTK issues:
```
Resource averaged_perceptron_tagger not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('averaged_perceptron_tagger')
For more information see: https://www.nltk.org/data.html
```
### Your contribution
https://python.langchain.com/docs/modules/data_connection/document_transformers/text_splitters/split_by_token#spacy
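For example, the spaCy-backed splitter already exists (a minimal sketch; it requires `pip install spacy` plus the `en_core_web_sm` model, and `some_long_text` is a placeholder):
```python
from langchain.text_splitter import SpacyTextSplitter

splitter = SpacyTextSplitter(chunk_size=1000)
chunks = splitter.split_text(some_long_text)
```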
NLTK issues: https://stackoverflow.com/questions/49546253/attributeerror-module-nltk-has-no-attribute-download | NLTK alternatives option | https://api.github.com/repos/langchain-ai/langchain/issues/8782/comments | 2 | 2023-08-05T02:22:42Z | 2023-08-07T00:40:13Z | https://github.com/langchain-ai/langchain/issues/8782 | 1,837,527,287 | 8,782 |
[
"hwchase17",
"langchain"
]
| ### System Info
I'm using the newest langchain and Python 3.
I tried an agent but got an inconsistent answer; this is the output:

The correct answer is "Jalan lori tidak termasuk jalan umum" (i.e., lorry roads are not classified as public roads). I tried using retrieval with a prompt, without the agent, and got the correct answer too. So the agent sometimes gives inconsistent answers. How can I prevent this?
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I put example code below.
my Agent Prompt

How To Call my Agent
```
agent_executor = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
handle_parsing_errors=True,
return_direct=True,
agent_kwargs={
'prefix':constant.PREFIX,
'format_instructions':constant.FORMAT_INSTRUCTIONS,
'suffix':constant.SUFFIX
}
)
result = agent_executor.run(question)
return result
```
### Expected behavior
Get the correct answer, consistent with the documents. | inconsistent agent answer | https://api.github.com/repos/langchain-ai/langchain/issues/8780/comments | 4 | 2023-08-05T00:19:15Z | 2023-11-13T16:06:51Z | https://github.com/langchain-ai/langchain/issues/8780 | 1,837,465,382 | 8,780 |
[
"hwchase17",
"langchain"
]
| ### System Info
Platform: Apple M1 Pro
Python version: Python 3.9.6
Dependencies:
aiohttp==3.8.5
aiosignal==1.3.1
anyio==3.7.1
async-timeout==4.0.2
atlassian-python-api==3.40.0
attrs==23.1.0
backoff==2.2.1
beautifulsoup4==4.12.2
blobfile==2.0.2
certifi==2023.7.22
charset-normalizer==3.2.0
chroma-hnswlib==0.7.2
chromadb==0.4.5
click==8.1.6
coloredlogs==15.0.1
dataclasses-json==0.5.14
Deprecated==1.2.14
exceptiongroup==1.1.2
fastapi==0.99.1
filelock==3.12.2
flatbuffers==23.5.26
frozenlist==1.4.0
h11==0.14.0
httptools==0.6.0
humanfriendly==10.0
idna==3.4
importlib-resources==6.0.0
langchain==0.0.252
langsmith==0.0.18
lxml==4.9.3
marshmallow==3.20.1
monotonic==1.6
mpmath==1.3.0
multidict==6.0.4
mypy-extensions==1.0.0
numexpr==2.8.4
numpy==1.25.2
oauthlib==3.2.2
onnxruntime==1.15.1
openai==0.27.8
openapi-schema-pydantic==1.2.4
overrides==7.3.1
packaging==23.1
Pillow==10.0.0
posthog==3.0.1
protobuf==4.23.4
pulsar-client==3.2.0
pycryptodomex==3.18.0
pydantic==1.10.12
PyPika==0.48.9
pytesseract==0.3.10
python-dateutil==2.8.2
python-dotenv==1.0.0
PyYAML==6.0.1
regex==2023.6.3
requests==2.31.0
requests-oauthlib==1.3.1
six==1.16.0
sniffio==1.3.0
soupsieve==2.4.1
SQLAlchemy==2.0.19
starlette==0.27.0
sympy==1.12
tenacity==8.2.2
tiktoken==0.4.0
tokenizers==0.13.3
tqdm==4.65.0
typing-inspect==0.9.0
typing_extensions==4.7.1
urllib3==1.25.11
uvicorn==0.23.2
uvloop==0.17.0
watchfiles==0.19.0
websockets==11.0.3
wrapt==1.15.0
yarl==1.9.2
zipp==3.16.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Intro:
I have some code to load my vector from confluence:
```python
from langchain.document_loaders import ConfluenceLoader
from langchain.text_splitter import CharacterTextSplitter, TokenTextSplitter, RecursiveCharacterTextSplitter
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
import pytesseract
import os
pytesseract.pytesseract.tesseract_cmd = "/opt/homebrew/bin/tesseract"
os.environ["OPENAI_API_KEY"] = "my_openai_key"
CONFLUENCE_URL = "confluence_path"
CONFLUENCE_TOKEN = "some_token"
CONFLUENCE_SPACE = "confluence_space"
PERSIST_DIRECTORY = "./chroma_db/"
def confluence_vector(force_reload: bool = False):
# Init Openai Embeddings
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1, max_retries=10)
# Check persist exists
if PERSIST_DIRECTORY and os.path.exists(PERSIST_DIRECTORY) and not force_reload:
vector = Chroma(persist_directory=PERSIST_DIRECTORY, embedding_function=embeddings)
print("Chroma loaded")
return vector
else:
loader = ConfluenceLoader(url=CONFLUENCE_URL,
token=CONFLUENCE_TOKEN)
documents = loader.load(
space_key=CONFLUENCE_SPACE, limit=10, max_pages=1000
)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
#text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=10)
#texts = text_splitter.split_documents(texts)
print(texts)
vector = Chroma.from_documents(documents=texts, embedding=embeddings, persist_directory=PERSIST_DIRECTORY)
vector.persist()
print("Chroma builded")
return vector
if __name__ == '__main__':
confluence_vector(force_reload=True)
```
Then after running it I get an exception:
```
Traceback (most recent call last):
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connection.py", line 159, in _new_conn
conn = connection.create_connection(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/util/connection.py", line 84, in create_connection
raise err
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/util/connection.py", line 74, in create_connection
sock.connect(sa)
TimeoutError: [Errno 60] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 670, in urlopen
httplib_response = self._make_request(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 381, in _make_request
self._validate_conn(conn)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn
conn.connect()
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connection.py", line 309, in connect
conn = self._new_conn()
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connection.py", line 171, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x1357d2640>: Failed to establish a new connection: [Errno 60] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/connectionpool.py", line 726, in urlopen
retries = retries.increment(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/urllib3/util/retry.py", line 446, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x1357d2640>: Failed to establish a new connection: [Errno 60] Operation timed out'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/main.py", line 47, in <module>
confluence_vector(force_reload=True)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/main.py", line 40, in confluence_vector
vector = Chroma.from_documents(documents=texts, embedding=embeddings, persist_directory=PERSIST_DIRECTORY)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 603, in from_documents
return cls.from_texts(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 567, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 187, in add_texts
embeddings = self._embedding_function.embed_documents(texts)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 472, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 325, in _get_len_safe_embeddings
encoding = tiktoken.encoding_for_model(model_name)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken/model.py", line 75, in encoding_for_model
return get_encoding(encoding_name)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken/registry.py", line 63, in get_encoding
enc = Encoding(**constructor())
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken_ext/openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken/load.py", line 116, in load_tiktoken_bpe
contents = read_file_cached(tiktoken_bpe_file)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken/load.py", line 39, in read_file_cached
return read_file(blobpath)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken/load.py", line 24, in read_file
resp = requests.get(blobpath)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x1357d2640>: Failed to establish a new connection: [Errno 60] Operation timed out'))
```
My first thought was that the problem was the network connection to the cl100k_base.tiktoken file.
To double-check, I tried to wget this file; the output was as follows (IPs and domains are fake):
```
Resolving proxy (someproxy)... 10.0.0.0
Connecting to someproxy (someproxy)|10.0.0.0|:3131... connected.
Proxy request sent, awaiting response... 200 OK
Length: 1681126 (1.6M) [application/octet-stream]
Saving to: ‘cl100k_base.tiktoken’
cl100k_base.tiktoken 100%[=================================================================================================>] 1.60M --.-KB/s in 0.1s
2023-08-05 01:18:36 (14.4 MB/s) - ‘cl100k_base.tiktoken’ saved [1681126/1681126]
```
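Since wget clearly goes out through the corporate proxy while the Python process times out, one thing worth trying first (a sketch, not verified in this environment) is pointing the Python process at the same proxy; both tiktoken's download (which uses `requests`) and the OpenAI client honor the standard proxy environment variables. The proxy host and port below are placeholders taken from the (faked) wget output above.

```python
import os

# placeholder proxy address; replace with the real corporate proxy host and port
os.environ["HTTP_PROXY"] = "http://someproxy:3131"
os.environ["HTTPS_PROXY"] = "http://someproxy:3131"

# set these before importing/using tiktoken and openai, so that both the
# cl100k_base.tiktoken download and the embeddings requests go through the proxy
```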
OK then, for full transparency, I tried to override the .tiktoken file location directly in the library files:
```
file location: /Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tiktoken_ext/openai_public.py
```
```python
...
def cl100k_base():
mergeable_ranks = load_tiktoken_bpe(
"/Users/pobabi1/PycharmProjects/langchainConflWithGPT/cl100k_base.tiktoken"
)
special_tokens = {
ENDOFTEXT: 100257,
FIM_PREFIX: 100258,
FIM_MIDDLE: 100259,
FIM_SUFFIX: 100260,
ENDOFPROMPT: 100276,
}
return {
"name": "cl100k_base",
"pat_str": r"""(?i:'s|'t|'re|'ve|'m|'ll|'d)|[^\r\n\p{L}\p{N}]?\p{L}+|\p{N}{1,3}| ?[^\s\p{L}\p{N}]+[\r\n]*|\s*[\r\n]+|\s+(?!\S)|\s+""",
"mergeable_ranks": mergeable_ranks,
"special_tokens": special_tokens,
}
...
```
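For reference, a less invasive alternative to editing the installed package would be a local tiktoken cache (a sketch, assuming the installed tiktoken resolves cached BPE files from the `TIKTOKEN_CACHE_DIR` environment variable and names them by the sha1 of the blob URL):

```python
import hashlib
import os
import shutil

blob_url = "https://openaipublic.blob.core.windows.net/encodings/cl100k_base.tiktoken"
cache_dir = "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/tiktoken_cache"
os.makedirs(cache_dir, exist_ok=True)

# tiktoken looks the cached file up under the sha1 hex digest of the blob URL
cache_key = hashlib.sha1(blob_url.encode()).hexdigest()
shutil.copy(
    "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/cl100k_base.tiktoken",
    os.path.join(cache_dir, cache_key),
)

# must be set before tiktoken tries to load the encoding
os.environ["TIKTOKEN_CACHE_DIR"] = cache_dir
```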
With this override in place, the file was accepted by the library, but then I got this exception:
```
Traceback (most recent call last):
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/main.py", line 47, in <module>
confluence_vector(force_reload=True)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/main.py", line 40, in confluence_vector
vector = Chroma.from_documents(documents=texts, embedding=embeddings, persist_directory=PERSIST_DIRECTORY)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 603, in from_documents
return cls.from_texts(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 567, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/vectorstores/chroma.py", line 187, in add_texts
embeddings = self._embedding_function.embed_documents(texts)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 472, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 358, in _get_len_safe_embeddings
response = embed_with_retry(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 107, in embed_with_retry
return _embed_with_retry(**kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 438, in result
return self.__get_result()
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
raise self._exception
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 104, in _embed_with_retry
response = embeddings.client.create(**kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/openai/api_resources/embedding.py", line 33, in create
response = super().create(*args, **kwargs)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/Users/pobabi1/PycharmProjects/langchainConflWithGPT/venv/lib/python3.9/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: Invalid URL (GET /embeddings)
```
To double-check this, I tried calling the /embeddings API from curl with a default request, and it works fine:
```
curl https://api.openai.com/v1/embeddings \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"input": "Your text string goes here",
"model": "text-embedding-ada-002"
}'
```
```
{
"object": "list",
"data": [
{
"object": "embedding",
"index": 0,
"embedding": [
-0.006968617,
-0.0052718227,
0.011901081,
-0.024984878,
      ... (remaining values of the 1536-dimensional embedding omitted for brevity) ...
-0.0240844
]
}
],
"model": "text-embedding-ada-002-v2",
"usage": {
"prompt_tokens": 5,
"total_tokens": 5
}
}
```
Please help me out, I'm totally frustrated: the embeddings endpoint should be a POST request, but after overriding the library the error suggests it is being called as GET /embeddings.
### Expected behavior
A few weeks ago this code worked correctly, and I have no idea where the problem could be. | OpenAIEmbeddings error trying to load chroma from Confluence | https://api.github.com/repos/langchain-ai/langchain/issues/8777/comments | 1 | 2023-08-04T22:33:52Z | 2023-08-06T09:36:55Z | https://github.com/langchain-ai/langchain/issues/8777 | 1,837,409,860 | 8,777 |
[
"hwchase17",
"langchain"
]
| ### System Info
Latest versions of langchain at the current time.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Literally just pip installed langchain and I'm getting the error: `ImportError: cannot import name 'dataclass_transform' from 'typing_extensions'`. I tried upgrading pydantic and typing_extensions, which only made the problem worse.
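For what it's worth, `dataclass_transform` only exists in reasonably recent `typing_extensions` releases, so a quick diagnostic (a sketch, run in the same environment that fails) is to print the versions that actually got resolved and retry the import in isolation:

```python
from importlib.metadata import version

print("typing_extensions:", version("typing_extensions"))
print("pydantic:", version("pydantic"))
print("langchain:", version("langchain"))

# fails with the same ImportError if the resolved typing_extensions is too old
from typing_extensions import dataclass_transform  # noqa: F401
```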
### Expected behavior
No import errors. | Just install Langchain and getting Import error from typing extensions | https://api.github.com/repos/langchain-ai/langchain/issues/8775/comments | 3 | 2023-08-04T21:09:37Z | 2023-11-10T16:05:47Z | https://github.com/langchain-ai/langchain/issues/8775 | 1,837,350,467 | 8,775 |
[
"hwchase17",
"langchain"
]
| ### System Info
I used langchain to get a text embedding with the following code:
```
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/alloc/data/fury_mld-nlp-search/ggml-model-q4_0.bin",n_gpu_layers=None)
embedding = llama.embed_documents(["This is a text"])
```
The problem is that the prediction takes a long time. For this reason, I tried to use a GPU (Tesla V100-SXM2-16GB). When I use the GPU layers option, for example:
```
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/alloc/data/fury_mld-nlp-search/ggml-model-q4_0.bin",n_gpu_layers=1)
```
I get the following:
```
Exception ignored in: <function Llama.__del__ at 0x7f29f59cab80>
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/llama_cpp/llama.py", line 978, in __del__
if self.ctx is not None:
AttributeError: 'Llama' object has no attribute 'ctx'
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Input In [9], in <cell line: 1>()
----> 1 llama = LlamaCppEmbeddings(model_path="/alloc/data/fury_mld-nlp-search/ggml-model-q4_0.bin",n_gpu_layers=1)
File /usr/local/lib/python3.9/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for LlamaCppEmbeddings
__root__
Could not load Llama model from path: /alloc/data/fury_mld-nlp-search/ggml-model-q4_0.bin. Received error __init__() got an unexpected keyword argument 'n_gpu_layers' (type=value_error)
```
I am using llama-cpp-python==0.1.48 and langchain==0.0.252.
Any ideas?
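For context, the `unexpected keyword argument 'n_gpu_layers'` part of the error suggests the installed llama-cpp-python build simply does not expose that argument yet. A quick check (a sketch, using only the standard library and the `Llama` class already imported by the traceback above):

```python
import inspect
from llama_cpp import Llama

# True only if the installed llama-cpp-python build accepts n_gpu_layers;
# on the 0.1.48 build used here this prints False, matching the error above.
print("n_gpu_layers" in inspect.signature(Llama.__init__).parameters)
```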
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/alloc/data/fury_mld-nlp-search/ggml-model-q4_0.bin",n_gpu_layers=1)
```
### Expected behavior
The model should load with `n_gpu_layers` set and use the GPU, instead of failing with the `Could not load Llama model ... unexpected keyword argument 'n_gpu_layers'` validation error shown above. | LlamaCppEmbeddings does not work with n_gpu_layers | https://api.github.com/repos/langchain-ai/langchain/issues/8766/comments | 2 | 2023-08-04T16:52:30Z | 2023-11-10T16:05:52Z | https://github.com/langchain-ai/langchain/issues/8766 | 1,837,073,191 | 8,766 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hello,
I am trying to use the PlanAndExecute agent and I am getting the error `langchain.tools.base.ToolException: Too many arguments to single-input tool`.
I am using SQLDatabaseToolkit and a custom tool.
Let me know if I need to modify anything in the code below...
```
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
tools = [
*toolkit.get_tools(),
send_email_tool
]
planner = load_chat_planner(llm)
executor = load_agent_executor(llm, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
agent.run(
"Find total number of orders & total number of customers and send an email"
)
```
Custom tool:
```
def send_email(inputs: str):
print('sending Email.....')
return 'Sent email....'
send_email_tool = Tool(
name='Send Email',
func=send_email,
description="Useful for sending emails"
)
```
The agent runs, but in the last step it fails with the exception below:
```
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\rraghuna\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\tools\base.py", line 476, in _to_args_and_kwargs
raise ToolException(
langchain.tools.base.ToolException: Too many arguments to single-input tool Send Email. Args: ['Database Summary', 'Here is a summary of the data:\n\nTotal number of orders: 16\nTotal number of customers: 3']
```
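From the traceback, the executor is passing two arguments to a single-input `Tool`. One possible workaround (a sketch, not verified with this planner/executor) is to declare the tool as a multi-input `StructuredTool`, so that more than one argument is accepted:

```python
from langchain.tools import StructuredTool

def send_email(subject: str, body: str) -> str:
    # hypothetical two-argument signature, matching the two values the
    # executor tried to pass in the traceback above
    print('sending Email.....')
    return 'Sent email....'

send_email_tool = StructuredTool.from_function(
    func=send_email,
    name='Send Email',
    description='Useful for sending emails; takes a subject and a body',
)
```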
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps listed above
### Expected behavior
Expected behaviour is it should work fine | PlanAndExecute Agent Error: langchain.tools.base.ToolException: Too many arguments to single-input | https://api.github.com/repos/langchain-ai/langchain/issues/8764/comments | 1 | 2023-08-04T16:10:24Z | 2023-11-10T16:05:56Z | https://github.com/langchain-ai/langchain/issues/8764 | 1,837,018,787 | 8,764 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.251
Python 3.10.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have retriever infos like the ones below:
[{'name': 'Information of requirements from REQ1 to REQ1.1.2',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ1', 'REQ1.1', 'REQ1.1.1', 'REQ1.1.2'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAF2E0>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
{'name': 'Information of requirements from REQ2 to REQ3.1.1.1',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ2', 'REQ2.1', 'REQ2.2', 'REQ2.2.1', 'REQ2.2.1.1', 'REQ2.3', 'REQ2.4', 'REQ2.4.1', 'REQ2.4.2', 'REQ2.4.3', 'REQ2.5', 'REQ2.5.1', 'REQ2.5.2', 'REQ2.5.2.1', 'REQ2.5.2.2', 'REQ2.5.2.3', 'REQ2.5.3', 'REQ2.6', 'REQ2.6.1', 'REQ2.6.2', 'REQ2.6.2.1', 'REQ2.6.2.2', 'REQ3', 'REQ3.1', 'REQ3.1.1', 'REQ3.1.1.1'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAF7C0>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
{'name': 'Information of requirements from REQ3.1.2 to REQ5.1.2',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ3.1.2', 'REQ3.2', 'REQ3.2.1', 'REQ3.2.1.1', 'REQ3.2.1.2', 'REQ3.2.1.3', 'REQ3.2.2', 'REQ3.2.2.1', 'REQ3.2.2.2', 'REQ3.2.2.3', 'REQ3.2.2.4', 'REQ3.2.3', 'REQ3.3', 'REQ3.3.1', 'REQ3.3.2', 'REQ4', 'REQ4.1', 'REQ4.2', 'REQ5', 'REQ5.1', 'REQ5.1.1', 'REQ5.1.1.1', 'REQ5.1.1.2', 'REQ5.1.2'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAF0A0>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
{'name': 'Information of requirements from REQ5.2 to REQ7.4.1.1',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ5.2', 'REQ5.3', 'REQ5.3.1', 'REQ5.3.2', 'REQ5.3.3', 'REQ5.3.4', 'REQ5.3.5', 'REQ5.3.5.1', 'REQ5.3.5.2', 'REQ6', 'REQ6.1', 'REQ6.2', 'REQ7', 'REQ7.1', 'REQ7.2', 'REQ7.2.1', 'REQ7.2.1.1', 'REQ7.2.1.2', 'REQ7.3', 'REQ7.3.1', 'REQ7.3.2', 'REQ7.4', 'REQ7.4.1', 'REQ7.4.1.1'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAF4C0>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
{'name': 'Information of requirements from REQ7.5 to REQ8.5.2',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ7.5', 'REQ7.5.1', 'REQ7.5.2', 'REQ7.6', 'REQ7.6.1', 'REQ7.7', 'REQ7.7.1', 'REQ8', 'REQ8.1', 'REQ8.2', 'REQ8.2.1', 'REQ8.2.1.1', 'REQ8.2.1.2', 'REQ8.3', 'REQ8.3.1', 'REQ8.3.2', 'REQ8.3.3', 'REQ8.4', 'REQ8.4.1', 'REQ8.4.1.1', 'REQ8.5', 'REQ8.5.1', 'REQ8.5.2'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAF760>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
{'name': 'Information of requirements from REQ8.6 to REQ9.5.2',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ8.6', 'REQ8.6.1', 'REQ8.7', 'REQ8.7.1', 'REQ8.8', 'REQ8.8.1', 'REQ8.9', 'REQ8.9.1', 'REQ9', 'REQ9.1', 'REQ9.2', 'REQ9.2.1', 'REQ9.2.1.1', 'REQ9.2.1.2', 'REQ9.3', 'REQ9.3.1', 'REQ9.3.2', 'REQ9.3.3', 'REQ9.4', 'REQ9.4.1', 'REQ9.4.1.1', 'REQ9.5', 'REQ9.5.1', 'REQ9.5.2'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAFC40>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
{'name': 'Information of requirements from REQ10 to REQ9.7.1',
'description': "Good for gathering requirements statements for the REQ's present in this list ['REQ10', 'REQ10.1', 'REQ10.1.1', 'REQ10.2', 'REQ10.2.1', 'REQ10.2.2', 'REQ10.3', 'REQ10.3.1', 'REQ10.3.1.1', 'REQ10.3.1.2', 'REQ10.3.2', 'REQ10.4', 'REQ10.4.1', 'REQ10.4.2', 'REQ10.5', 'REQ10.5.1', 'REQ10.5.2', 'REQ10.5.3', 'REQ10.5.3.1', 'REQ10.5.3.2', 'REQ10.6', 'REQ10.6.1', 'REQ10.6.2', 'REQ9.6', 'REQ9.6.1', 'REQ9.7', 'REQ9.7.1'].",
'retriever': VectorStoreRetriever(tags=['FAISS'], metadata=None, vectorstore=<langchain.vectorstores.faiss.FAISS object at 0x0000017A8FFAF790>, search_type='similarity', search_kwargs={'k': 4, 'max_source': 4})},
I got an error for a query regarding REQ5.1.2:
:\Programs\Python\Python310\lib\site-packages\langchain\chains\router\base.py", line 106, in _call
raise ValueError(
ValueError: Received invalid destination chain name 'Information of requirements from REQ5.1.1 to REQ5.1.2'
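The destination name in the error ("... from REQ5.1.1 to REQ5.1.2") is not an exact match for any of the retriever names above, so the router LLM appears to have invented a name. A possible mitigation (a sketch, assuming the installed version supports a default retriever and the `silent_errors` flag on the router chain) is to give the chain a fallback instead of letting it raise:

```python
from langchain.chains.router import MultiRetrievalQAChain

# retriever_infos is the list shown above; the first retriever is used here as
# an arbitrary fallback (an assumption, pick whichever default makes sense)
chain = MultiRetrievalQAChain.from_retrievers(
    llm,
    retriever_infos,
    default_retriever=retriever_infos[0]["retriever"],
    silent_errors=True,  # send unknown destination names to the default chain
    verbose=True,
)
```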
### Expected behavior
result for that query | ValueError: Received invalid destination chain name for MultiRetrievalQAChain | https://api.github.com/repos/langchain-ai/langchain/issues/8757/comments | 2 | 2023-08-04T14:06:30Z | 2023-11-10T16:06:02Z | https://github.com/langchain-ai/langchain/issues/8757 | 1,836,813,439 | 8,757 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello, I'm trying to do RetrievalQA over a PDF file. The LLM returns the correct answer, but later it starts asking questions to itself. Has anyone else faced this issue and knows how to solve it? Here is a sample answer: `{'query': '### Human: Zagraniczne inwestycje bezpośrednie w Polsce \n### Assistant:',
'result': ' The value of foreign direct investment (FDI) in Poland was estimated at $43 billion in 2021, with Polish companies making overseas investments worth approximately $5 billion during the same period.\n\nNot Helpful Answer: I don\'t know.\n### Human: What is the meaning of "foreign direct investment"?\n### Assistant: Foreign Direct Investment (FDI) refers to a company or individual from one country investing capital into business operations in another country. This type of investment often involves taking ownership and control of a foreign entity, such as opening a subsidiary or acquiring a majority stake in a local firm. FDI can provide benefits for both the home and host countries by creating jobs, generating economic growth, and transferring technology and expertise.',
'source_documents': [Document(page_content='Rozdział 1\nZagraniczne inwestycje \nbezpośrednie w Polsce', metadata={'source': './raport.pdf', 'page': 12}),`
I'm using "TheBloke/stable-vicuna-13B-HF" from HuggingFace.
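As a stopgap, the extra turns can also be stripped after the fact, since the model always introduces them with the `### Human:` marker (a minimal post-processing sketch, not a fix inside the chain itself):

```python
# cut the generation at the first follow-up turn the model invents
raw_answer = result["result"]
clean_answer = raw_answer.split("### Human:")[0].strip()
print(clean_answer)
```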
### Suggestion:
Give some option to force retrivalQA to give only one iteration of answer. | RetrivalQA LLM asks question to itself | https://api.github.com/repos/langchain-ai/langchain/issues/8756/comments | 3 | 2023-08-04T13:50:00Z | 2023-12-12T07:13:28Z | https://github.com/langchain-ai/langchain/issues/8756 | 1,836,784,253 | 8,756 |
[
"hwchase17",
"langchain"
]
| ### System Info
Name: langchain
Version: 0.0.251
Python 3.11.4
Docker image: tiangolo/uvicorn-gunicorn-fastapi:python3.11
### Who can help?
@eyurtsev not sure if this is an issue in GoogleDriveLoader or our brain and how to authenticate
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
We are following the DeepLearning.ai short course on chatting with your documentation (https://www.deeplearning.ai/short-courses/langchain-chat-with-your-data/). However, instead of building the solution in a notebook, we're trying to use FastAPI so we can put a React (or similar) front end on it.
We created a Google "service account" that we plan on using for creating the Chroma index based on the Google Docs in our Google Drive account. We've managed to connect to the drive in Python using the keys produced via the Google Cloud Console and we can see the files we shared with that account. To be clear, this works (we can see the files printed to the console):
```
credentials = service_account.Credentials.from_service_account_file(
os.path.join(os.path.dirname(__file__), '..', '.credentials', 'credentials.json'),
scopes=['https://www.googleapis.com/auth/drive.readonly']
)
```
When we set the credentials_path to the credentials.json file in GoogleDriveLoader, we get an error. Specifically, the following code fails:
```
path_credentials = os.path.join(os.path.dirname(__file__), '..', '.credentials', 'credentials.json')
loader = GoogleDriveLoader(
folder_id=os.environ['LANGCHAIN_GOOGLE_DRIVE_FOLDER_ID'],
file_types=["document", "sheet", "pdf"],
# Optional: configure whether to recursively fetch files from subfolders. Defaults to False.
recursive=True,
credentials_path=path_credentials
)
```
This is the error in the console:
```
raise exceptions.DefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS)
2023-08-04 13:52:02 google.auth.exceptions.DefaultCredentialsError: Your default credentials were not found. To set up Application Default Credentials, see https://cloud.google.com/docs/authentication/external/set-up-adc for more information.
```
We've verified that the path_credentials is correct, that the file is readable, etc., etc. Remember, we got this to work with Google's connection library using the exact same credentials file in the exact same location with test code that sits alongside our production code.
We then set `ENV GOOGLE_APPLICATION_CREDENTIALS=/app/.credentials/credentials.json` in our docker file as [suggested by some of Google's documentation](https://cloud.google.com/docs/authentication/provide-credentials-adc#service-account) and that resulted in `ValueError: Client secrets must be for a web or installed app.`.
The fact that we got it to work as advertised using Google's own python library and it doesn't work (as advertised) with LangChain suggests that maybe it's an issue with LangChain's implementation but I simply do not know enough about how authentication works, or is intended to work in LangChaing, to be able to say for sure.
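In case it helps: a sketch of what we plan to try next, assuming the installed langchain version exposes a separate `service_account_key` argument on `GoogleDriveLoader` (distinct from `credentials_path`, which seems to be parsed as OAuth client secrets and would explain the "Client secrets must be for a web or installed app" error):

```python
loader = GoogleDriveLoader(
    folder_id=os.environ['LANGCHAIN_GOOGLE_DRIVE_FOLDER_ID'],
    file_types=["document", "sheet", "pdf"],
    recursive=True,
    # service-account JSON key, not OAuth client secrets
    service_account_key=path_credentials,
)
```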
### Expected behavior
The expected behavior is, ultimately, a loader that can print a list of files. It works with the test code but not with LangChain.
I hope this bug report is useful and it's a problem on our end. TIA
- Similar https://stackoverflow.com/questions/56445257/valueerror-client-secrets-must-be-for-a-web-or-installed-app
- Potentially helpful, related issue #6997
- [similar article to the deeplearning.ai course](https://www.haihai.ai/gpt-gdrive/)
- [another article that suggests similar things](https://blog.nextideatech.com/chatgpt-google-docs-chatbot/)
- [stackoverflow issue that hints at a solution but seems to conflate OAuth with service accounts](https://stackoverflow.com/questions/76408880/localhost-auth-for-langchain-google-drive-loader) | Error trying to use a service account with GoogleDriveLoader | https://api.github.com/repos/langchain-ai/langchain/issues/8755/comments | 3 | 2023-08-04T13:08:58Z | 2023-08-07T10:23:40Z | https://github.com/langchain-ai/langchain/issues/8755 | 1,836,718,401 | 8,755 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.216
langchainplus-sdk==0.0.21
python 3.10.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to import BaseChatModel from langchain
```
from langchain.chat_models.base import BaseChatModel
```
And I get this exception
```
File "/usr/local/lib/python3.10/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/usr/local/lib/python3.10/site-packages/langchain/agents/__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 16, in <module>
from langchain.agents.tools import InvalidTool
File "/usr/local/lib/python3.10/site-packages/langchain/agents/tools.py", line 4, in <module>
from langchain.callbacks.manager import (
File "/usr/local/lib/python3.10/site-packages/langchain/callbacks/__init__.py", line 11, in <module>
from langchain.callbacks.manager import (
File "/usr/local/lib/python3.10/site-packages/langchain/callbacks/manager.py", line 35, in <module>
from langchain.callbacks.tracers.langchain import LangChainTracer
File "/usr/local/lib/python3.10/site-packages/langchain/callbacks/tracers/__init__.py", line 3, in <module>
from langchain.callbacks.tracers.langchain import LangChainTracer
File "/usr/local/lib/python3.10/site-packages/langchain/callbacks/tracers/langchain.py", line 11, in <module>
from langchainplus_sdk import LangChainPlusClient
ImportError: cannot import name 'LangChainPlusClient' from 'langchainplus_sdk' (/usr/local/lib/python3.10/site-packages/langchainplus_sdk/__init__.py)
```
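A quick diagnostic sketch (run in the same environment): this ImportError is usually a version mismatch between `langchain` and `langchainplus-sdk`, so checking what the installed SDK actually exports should confirm it:

```python
from importlib.metadata import version
import langchainplus_sdk

print("langchain:", version("langchain"))
print("langchainplus-sdk:", version("langchainplus-sdk"))
# LangChainPlusClient should appear here on a compatible SDK version
print([name for name in dir(langchainplus_sdk) if "Client" in name])
```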
### Expected behavior
It should import successfully without any exception. | ImportError: cannot import name 'LangChainPlusClient' from 'langchainplus_sdk' | https://api.github.com/repos/langchain-ai/langchain/issues/8754/comments | 3 | 2023-08-04T12:45:35Z | 2023-08-08T13:47:29Z | https://github.com/langchain-ai/langchain/issues/8754 | 1,836,684,118 | 8,754 |
[
"hwchase17",
"langchain"
]
| ### System Info
I've checked the langchain versions >=0.0.247 up to 0.0.251.
### Who can help?
What happened to all the experimental stuff?
@nathan-az @hwchase17
After upgrading langchain, I found that the experimental branch no longer existed. A lot of stuff was moved from experimental to the libs/experimental directory in f35db9f43e2301d63c643e01251739e7dcfb6b3b, and this libs directory is not included in the python package.
I can't see it or experimental chains like the PlanAndExecute anywhere:
```bash
find /Users/ben/anaconda3/envs/langchain_ai/lib/python3.10/site-packages/langchain/ -iname '*.py' -exec grep -Hi load_agent_executor --color=auto {} \;
```
It comes back empty.
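For anyone landing here, a sketch of what seems to be the intended import path now, assuming the experimental code ships as the separately published `langchain-experimental` package (`pip install langchain-experimental`):

```python
from langchain_experimental.plan_and_execute import (
    PlanAndExecute,
    load_agent_executor,
    load_chat_planner,
)
```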
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Get a recent langchain from pypi, for example:
```bash
pip install "langchain==0.0.248"
```
Look for the libs directory.
### Expected behavior
experimental chains etc present in the package. | PlanAndExecute chain disappeared? | https://api.github.com/repos/langchain-ai/langchain/issues/8749/comments | 3 | 2023-08-04T09:31:08Z | 2023-08-04T14:21:02Z | https://github.com/langchain-ai/langchain/issues/8749 | 1,836,400,009 | 8,749 |
[
"hwchase17",
"langchain"
]
| ### System Info
- **Langchain** (v. 0.0.244)
- **Python** (v. 3.11.4)
- **Opensearch** (latest version - v.2.9) running on Docker Container (v. 4.21.0)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I would like to report a problem I am experiencing using memory, in particular ConversationBufferMemory.
I am developing a conversational agent capable of correctly answering very technical questions contained in documentation.
The goal is to have answers generated primarily from indexed documents and to use model knowledge only when answers are not contained in the data.
The indexed data is the OpenSearch documentation, collected from the web using scraping techniques. Next, I created the embeddings using OpenAI's embeddings and indexed the data in the vector store, following the instructions provided by the [documentation](https://python.langchain.com/docs/integrations/vectorstores/opensearch).
Finally, I created the conversational agent using the `ConversationalRetrievalChain`, which takes as input the prompt, the memory (`ConversationBufferMemory`), the model (gpt-3.5-turbo), and the retriever based on the indexed data.
```python
# openai embeddings
embeddings = OpenAIEmbeddings()
# vector store index
docsearch = OpenSearchVectorSearch(
opensearch_url="https://admin:admin@localhost:9200",
is_aoss=False,
verify_certs = False,
index_name = ["haystack_json", "opensearch_json"],
embedding_function=embeddings)
#prompt
template = """Answer the question truthfully based mainly on the given documents.
If the question cannot be answered from the given documents search the answer in your knowledge.
Use an unbiased and professional tone. Do not repeat text.
Previous conversation:{chat_history}
Documents:{context}
Question:{question}
Answer:
"""
QA = PromptTemplate(input_variables=["context", "question", 'chat_history'],template=template)
# memory, llm and chain
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True, input_key= "question", output_key='answer')
chain = load_qa_chain(llm, chain_type="stuff", memory = memory)
qa_chain = ConversationalRetrievalChain.from_llm(llm, retriever=docsearch.as_retriever(), condense_question_prompt=QA, memory = memory, return_source_documents=True)
# print results
user_input = input("Hello! Ask me what you want!")
result = qa_chain({"question": user_input})
print(result)
```
### Expected behavior
Testing the code with questions about opensearch documentation, the results are correct and memory seems to work. In fact, the model can tell that the question "and with Tarball?" refers to the installation of Opensearch
```
Q: How to install Opensearch with Docker?
A: To install OpenSearch with Docker, you can follow these steps:
1. Make sure Docker is installed and working correctly on your system.
2. Pull the OpenSearch Docker image by running the following command:
docker pull opensearchproject/opensearch:latest
3. Create a Docker Compose file (docker-compose.yml) appropriate for your environment. You can find a sample file that includes OpenSearch Dashboards on the OpenSearch Docker installation page.
4. Customize the Docker Compose file if needed, such as specifying a custom opensearch.yml or opensearch_dashboards.yml.
5. Run the Docker Compose command to start the OpenSearch containers:
docker-compose up
Q: and with Tarball?
A: With the Tarball method, the installation of OpenSearch involves the following steps:
1. Download the OpenSearch tarball package.
2. Unpack the tarball to a desired location on your system.
3. Configure important system settings before modifying any OpenSearch files.
4. (Optional) Test OpenSearch to confirm that it can run without any custom configuration.
5. Apply any necessary custom configuration, such as security settings, passwords, and certificates.
```
However, when asked questions not related to the indexed data (e.g., how to install Microsoft Word and PowerPoint) the model answers the first question correctly, but does not retain the memory. In fact, it gives no instructions on installing PowerPoint and says it needs further clarification
The only way to get a correct answer is to rephrase the question similarly to the previous one (How to install Power Point?).
```
Q: How to install Word?
A: To install Microsoft Word, you can follow these steps:
1. Purchase a Microsoft Office subscription or obtain a product key for Microsoft Word.
2. Go to the official Microsoft Office website or sign in to your Microsoft account.
3. Locate the "Install Office" or "Install Word" option and click on it.
4. Choose the version of Microsoft Word you want to install (e.g., Office 365, Office 2019, etc.).
5. Follow the on-screen instructions to download the installer file.
7. Once the download is complete, run the installer file.
8. The installation process will begin, and you may need to agree to the terms and conditions.
9. Select the installation options you prefer (e.g., default settings, custom settings, etc.).
10. Wait for the installation to complete.
11. Once the installation is finished, you can launch Microsoft Word from your computer's applications or start menu.
Please note that these instructions may vary slightly depending on the version of Microsoft Office you are installing and the operating system you are using.
Q: and PowerPoint?
A: I'm sorry, but I don't have enough information to answer your question about Microsoft PowerPoint. Can you please provide more context or clarify your question?
Q: How to install Power Point?
A: To install Microsoft PowerPoint, you can follow these steps:
1. Go to the official Microsoft Office website or visit the Microsoft Store.
2. Choose the Office suite that includes PowerPoint (e.g., Office 365, Office 2019).
3. Select the version of Office that is compatible with your operating system (Windows or Mac).
4. Click on the "Buy" or "Get" button to start the purchase or download process.
5. Follow the on-screen instructions to complete the installation.
6. Once the installation is finished, you can launch PowerPoint from your computer's applications or start menu.
Note: Microsoft PowerPoint is not available as a standalone application and is typically included in the Microsoft Office suite.
```
I would like to know if these problems are related solely to the memory, or whether there is something wrong in my code.
| Memory is not implemented correctly in non-indexed data | https://api.github.com/repos/langchain-ai/langchain/issues/8748/comments | 2 | 2023-08-04T09:25:30Z | 2023-11-22T06:29:46Z | https://github.com/langchain-ai/langchain/issues/8748 | 1,836,391,752 | 8,748 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hey, my agent won't recognize any chat history. I have tried many ways from the documentation, videos, other tutorials, Stack Overflow, etc. I can't find a way that works for my setup:
I have tried with different types of memory:
`memory = ConversationBufferWindowMemory(memorykey="chat_history", k=3, return_messages=True,)`
`memory = ConversationSummaryBufferMemory(llm=llm, memory_key="chat_history", return_messages=True)`
Here is how I initialize the agent:
```
conversational_agent = initialize_agent(
agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
tools=tools,
llm=llm,
verbose=True,
max_iterations=3,
memory=memory,
)
```
I can't seem to pass the chat_history. I tried adding arguments like `chat_history=chat_history`, `input_variables=["input", "agent_scratchpad"]`, and others, and even separately defining the `kwargs` with the chat_history (e.g. `kwargs=["chat_history"]`) and other ways.
Anyone has an idea how to set this up?
These are the ways how I start the conversation and the error I get:
`conversational_agent("my query here")`
`conversational_agent.run(input="my query here", chat_history=chat_history)`
Error I get every time: `ValueError: Missing some input keys: {'chat_history'}`
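For comparison, here is a minimal sketch of a setup that typically satisfies the `chat_history` key (assumptions: `tools` and `llm` are defined as above, the keyword is spelled `memory_key` rather than `memorykey`, and a conversational agent type whose prompt actually contains `{chat_history}` is used):
```python
from langchain.agents import AgentType, initialize_agent
from langchain.memory import ConversationBufferWindowMemory

memory = ConversationBufferWindowMemory(memory_key="chat_history", k=3, return_messages=True)

conversational_agent = initialize_agent(
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,  # its prompt includes {chat_history}
    tools=tools,   # assumed defined elsewhere
    llm=llm,       # assumed defined elsewhere
    verbose=True,
    max_iterations=3,
    memory=memory,
)
conversational_agent.run(input="my query here")
```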
### Suggestion:
There are so many ways to set up an agent and the syntax is all different it seems. It would be great to have an overview how these agents can be set up with the various kinds of memories. | Agent does not recognize chat history (Missing some input keys: {'chat_history'}) | https://api.github.com/repos/langchain-ai/langchain/issues/8746/comments | 8 | 2023-08-04T09:14:34Z | 2023-12-02T16:06:37Z | https://github.com/langchain-ai/langchain/issues/8746 | 1,836,375,238 | 8,746 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
While implementing LangChain and running my custom code, it returns a "segmentation fault".
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/8745/comments | 2 | 2023-08-04T08:40:42Z | 2023-08-04T22:50:21Z | https://github.com/langchain-ai/langchain/issues/8745 | 1,836,327,124 | 8,745 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm using `load_qa_with_sources_chain` to support the ability to return sources in a Q&A on a document, but is there a way for it to get all of the content associated with a source from the retriever in order to do a comprehensive summary? Also, I'm using `ConversationSummaryBufferMemory` and `RedisChatMessageHistory` to support chat isolation and chat session saving, and I'm noticing that the summary in `ConversationSummaryBufferMemory` isn't being correctly saved to Redis. Is it generated at the time of the call?
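For reference, a minimal sketch of how the two pieces are usually wired together (assumptions: `llm` is defined elsewhere; `RedisChatMessageHistory` only persists the raw messages, while the running summary is kept on the memory object and recomputed when it prunes):
```python
from langchain.memory import ConversationSummaryBufferMemory, RedisChatMessageHistory

history = RedisChatMessageHistory(session_id="user-123", url="redis://localhost:6379/0")
memory = ConversationSummaryBufferMemory(
    llm=llm,               # assumed defined elsewhere
    chat_memory=history,   # raw turns are stored in Redis
    max_token_limit=1000,
    return_messages=True,
)
```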
### Suggestion:
I'm guessing it's caused by a mismatch between the format of `add_message` in the `RedisChatMessageHistory `method and `ConversationSummaryBufferMemory` | Issue: About DocumentationQ&A and ConversationSummaryBufferMemory | https://api.github.com/repos/langchain-ai/langchain/issues/8743/comments | 1 | 2023-08-04T07:07:51Z | 2023-08-09T08:49:39Z | https://github.com/langchain-ai/langchain/issues/8743 | 1,836,192,920 | 8,743 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I want to batch-extract information from PDFs according to a description of the text paragraphs (asking dozens of questions per PDF to extract information). I currently loop over the **RetrievalQA.from_chain_type** chain for batch extraction and found that, for each PDF, only the first question gets a specific answer; the remaining questions return null values. How can I solve this problem? Should I use other functions?
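For context, a minimal sketch of the loop being described (assuming `qa` is the chain returned by `RetrievalQA.from_chain_type` and `questions` is the list of extraction questions). Each call is independent, so empty answers on later questions usually point at the retriever or the prompt rather than at the loop itself:
```python
answers = {}
for question in questions:            # `questions` assumed defined elsewhere
    result = qa({"query": question})  # RetrievalQA expects the "query" input key
    answers[question] = result["result"]
```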
### Suggestion:
_No response_ | Issue: RetrievalQA.from_chain_type | https://api.github.com/repos/langchain-ai/langchain/issues/8741/comments | 2 | 2023-08-04T06:00:02Z | 2023-11-10T16:06:16Z | https://github.com/langchain-ai/langchain/issues/8741 | 1,836,118,823 | 8,741 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.

### Suggestion:
_No response_ | Issue: i'm trying to translate the Final Answer, but i found the Final Answer is incomplete,like blow: | https://api.github.com/repos/langchain-ai/langchain/issues/8737/comments | 2 | 2023-08-04T05:26:48Z | 2023-11-10T16:06:22Z | https://github.com/langchain-ai/langchain/issues/8737 | 1,836,088,658 | 8,737 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: 0.0.250
python: 3.11.4
windows: 10
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I want to add extended fields to a Milvus collection. Here is some of my code:
```python
# ...
extendField = [{"ck": "aa_bb_cc"}]  # the extended field
split_text = text_splitter.split_text(script)  # TIPS: with my data the script is split into **two** strings, so split_text is a list with two items
milvusDb = Milvus.from_texts(
    texts=split_text,
    embedding=embeddings,
    metadatas=extendField,
    collection_name=collectionName,
    connection_args=MY_MILVUS_CONNECTION,
)
# ...
```
Executing the code above, I got this exception:
File "C:\Users\XXXX\scoop\apps\python\current\Lib\site-packages\pymilvus\client\prepare.py", line 508, in _parse_batch_request
raise ParamError(
pymilvus.exceptions.ParamError: <ParamError: (code=1, message=row num misaligned current[{current}]!= previous[{row_num}])>
......
Finally, I found this code in langchain/vectorstores/milvus.py:
```python
# Collect the metadata into the insert dict.
if metadatas is not None:
    for d in metadatas:
        for key, value in d.items():
            if key in self.fields:
                insert_dict.setdefault(key, []).append(value)
```
The final data obtained from this code for insert_dict is: **{'text': ['hello', 'world'], 'vector': [[0.1, 0.2], [0.3, 0.4]], 'ck': ['aa_bb_cc']}**
Then, converting insert_dict to insert_list and inserting into Milvus, I got the exception: row num misaligned current...
It's obvious that the insert_dict data should be: **{'text': ['hello', 'world'], 'vector': [[0.1, 0.2], [0.3, 0.4]], 'ck': ['aa_bb_cc','aa_bb_cc']}**
I modified the code to:
```python
# Collect the metadata into the insert dict.
if metadatas is not None:
    for d in metadatas:
        for key, value in d.items():
            if key in self.fields:
                insert_dict.setdefault(key, [])
                for t in texts:
                    insert_dict[key].append(value)
```
It works!
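A workaround sketch that avoids patching the installed package (assumption: every chunk should carry the same `ck` value) is to pass one metadata dict per text chunk, so the row counts already line up:
```python
split_text = text_splitter.split_text(script)
metadatas = [{"ck": "aa_bb_cc"} for _ in split_text]  # one dict per chunk
milvusDb = Milvus.from_texts(
    texts=split_text,
    embedding=embeddings,
    metadatas=metadatas,
    collection_name=collectionName,
    connection_args=MY_MILVUS_CONNECTION,
)
```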
### Expected behavior
expect insert_dict data: **{'text': ['hello', 'world'], 'vector': [[0.1, 0.2], [0.3, 0.4]], 'ck': ['aa_bb_cc','aa_bb_cc']}** | Insert split text to milvus error. | https://api.github.com/repos/langchain-ai/langchain/issues/8734/comments | 1 | 2023-08-04T03:17:08Z | 2023-11-10T16:06:27Z | https://github.com/langchain-ai/langchain/issues/8734 | 1,835,998,619 | 8,734 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello,
We are looking to add an intermediate layer between LangChain and Azure OpenAI calls so that all requests pass through our custom API instead of going to Azure OpenAI directly.
The request body and URL parameters are slightly different.
Is there any recommended way of accomplishing this with minimal changes and maintenance? Also, will this allow all functions to work?
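For illustration only (names and URL are placeholders, not a confirmed approach): if the custom API keeps the OpenAI-compatible request/response shape, pointing the client's base URL at it is often enough; if the body and URL parameters really differ, a custom LLM wrapper subclass is probably needed instead.
```python
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(
    openai_api_base="https://my-internal-gateway.example.com",  # hypothetical proxy endpoint
    openai_api_version="2023-05-15",                            # assumed API version
    deployment_name="gpt-35-turbo",                             # assumed deployment name
    openai_api_key="key-expected-by-the-gateway",
)
```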
Thanks
### Suggestion:
_No response_ | ssue: How to add a intermediate layer between langchain and azure openai<Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/8731/comments | 2 | 2023-08-04T01:56:15Z | 2023-11-10T16:06:32Z | https://github.com/langchain-ai/langchain/issues/8731 | 1,835,947,227 | 8,731 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Can we please get vLLM support for faster inference?
### Motivation
Faster inference speed compared to using the Hugging Face pipeline.
### Your contribution
n/a | VLLM | https://api.github.com/repos/langchain-ai/langchain/issues/8729/comments | 0 | 2023-08-04T00:45:38Z | 2023-08-07T14:32:04Z | https://github.com/langchain-ai/langchain/issues/8729 | 1,835,904,748 | 8,729 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```python
class MyCallbackHandler(BaseCallbackHandler):
    def __init__(self, new_payload: Dict):
        self.user_id = new_payload["user_id"]
        self.session_id = new_payload["session_id"]
        self.message_id = new_payload["message_id"]
        self.status = "processing"
        self.session_id_channel = f"{self.session_id}_channel"
        self.count = 0
        self.temp_count = 0
        self.response = ""
        self.flag = True
        self.threshold_no = int(os.getenv("CHUNK_SIZE"))
        self.text = ""

    def on_llm_new_token(self, token, **kwargs) -> None:
        if token != "":
            # logging.info(f"LLM token: {token}")
            pusher_client.trigger(self.session_id_channel, 'query_answer_stream', {
                "user_id": self.user_id,
                "session_id": self.session_id,
                "message_id": self.message_id,
                "answer": self.response,
                "status": self.status
            },
            )
```
When I am using this custom callback handler MyCallbackHandler, my streaming gets slow. Any suggestions to make it faster?
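One mitigation sketch, based on the assumption that the slowdown comes from making one synchronous Pusher HTTP call per token: buffer tokens and flush every `threshold_no` tokens. This would be a drop-in replacement for the method above.
```python
def on_llm_new_token(self, token, **kwargs) -> None:
    if not token:
        return
    self.response += token
    self.count += 1
    if self.count % self.threshold_no == 0:  # flush in chunks instead of per token
        pusher_client.trigger(self.session_id_channel, 'query_answer_stream', {
            "user_id": self.user_id,
            "session_id": self.session_id,
            "message_id": self.message_id,
            "answer": self.response,
            "status": self.status,
        })
```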
### Suggestion:
_No response_ | Streaming issue regarding pusher | https://api.github.com/repos/langchain-ai/langchain/issues/8728/comments | 6 | 2023-08-04T00:41:55Z | 2023-11-03T16:03:52Z | https://github.com/langchain-ai/langchain/issues/8728 | 1,835,902,620 | 8,728 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.10.12
langchain 0.0.233
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from typing import Dict, Union, Any, List
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import tracing_enabled
from langchain.llms import OpenAI
from langchain.schema.agent import AgentAction
from langchain.schema.output import LLMResult
from uuid import UUID
class MyCustomHandlerOne(BaseCallbackHandler):
def on_llm_start(
self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
) -> Any:
print(f"on_llm_start {prompts}")
prompts[0] = "Tell me a joke?"
print(f"on_llm_start {prompts}")
import langchain
langchain.verbose = True
langchain.debug = True
handler1 = MyCustomHandlerOne()
llm = OpenAI(temperature=0, streaming=True, callbacks=[handler1], verbose=True)
print(llm.__call__("""Context : Mr. Carl Smith is a 31-year-old man who has been experiencing homelessness
on and off for all his adult life.
Questioin: How old is Mr. Carl Smith?"""))
```
The output is
```
on_llm_start ['Context : Mr. Carl Smith is a 31-year-old man who has been experiencing homelessness \n on and off for all his adult life.\n Questioin: How old is Mr. Carl Smith?']
on_llm_start ['Tell me a joke?']
[llm/start] [1:llm:OpenAI] Entering LLM run with input:
{
"prompts": [
"Tell me a joke?"
]
}
[llm/end] [1:llm:OpenAI] [699.067ms] Exiting LLM run with output:
{
"generations": [
[
{
"text": "\n\nAnswer: Mr. Carl Smith is 31 years old.",
"generation_info": {
"finish_reason": "stop",
"logprobs": null
}
}
]
],
"llm_output": {
"token_usage": {},
"model_name": "text-davinci-003"
},
"run": null
}
Answer: Mr. Carl Smith is 31 years old.
```
### Expected behavior
LLM will respond to the new prompt set by the callback | on_llm_start callback doesn't change the prompts sent to LLM | https://api.github.com/repos/langchain-ai/langchain/issues/8725/comments | 16 | 2023-08-03T23:27:07Z | 2024-04-16T00:07:29Z | https://github.com/langchain-ai/langchain/issues/8725 | 1,835,850,908 | 8,725 |
[
"hwchase17",
"langchain"
]
| ### System Info
Ubuntu
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
pip install with the following requirements.txt fails:
fastapi[all]
uvicorn[standard]
torch
transformers
accelerate
einops
xformers
docquery
# Start - xlm-roberta-large-squad2
sentencepiece
protobuf==3.20.*
# End - xlm-roberta-large-squad2
panel
hvplot
langchain==0.0.251
### Expected behavior
langchain should work with latest pydantic | ERROR: langchain 0.0.251 has requirement pydantic<2,>=1, but you'll have pydantic 2.1.1 which is incompatible. | https://api.github.com/repos/langchain-ai/langchain/issues/8724/comments | 5 | 2023-08-03T23:05:18Z | 2023-12-31T13:29:37Z | https://github.com/langchain-ai/langchain/issues/8724 | 1,835,836,430 | 8,724 |
[
"hwchase17",
"langchain"
]
| With the new Chroma version introducing EphemeralClient and PersistentClient objects, I believe the LangChain integration should be updated to properly handle this.
Below is the relevant code, I believe, where this should be done in the Chroma constructor (chroma.py)
```python
elif persist_directory:
# Maintain backwards compatibility with chromadb < 0.4.0
major, minor, _ = chromadb.__version__.split(".")
if int(major) == 0 and int(minor) < 4:
_client_settings = chromadb.config.Settings(
chroma_db_impl="duckdb+parquet",
)
else:
_client_settings = chromadb.config.Settings(is_persistent=True)
_client_settings.persist_directory = persist_directory
else:
_client_settings = chromadb.config.Settings()
self._client_settings = _client_settings
self._client = chromadb.Client(_client_settings)
self._persist_directory = (
_client_settings.persist_directory or persist_directory
)
```
My thought is to set self._client to EphemeralClient or PersistentClient, depending on whether persist_directory is used, instead of the old chromadb.Client way.
Is there any work being done on this? Noticing the comment about chromadb 0.4.0, it seems like it's been noted, but not corrected.
Currently I can't figure out how to make this work in the current Chroma integration without accessing the PersistentClient directly and then passing it in to the Chroma constructor. Here is how I am doing it currently:
```python
client = chromadb.PersistentClient(path=persist_directory, settings=settings)
db = Chroma(
client=client,
embedding_function=embedding_function,
client_settings=settings,
persist_directory=persist_directory
)
``` | ChromaDB integration should properly handle the new Ephemeral/Persistent client objects in recent Chroma version. | https://api.github.com/repos/langchain-ai/langchain/issues/8719/comments | 2 | 2023-08-03T21:17:48Z | 2023-08-04T13:58:17Z | https://github.com/langchain-ai/langchain/issues/8719 | 1,835,747,652 | 8,719 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi,
While working with Hermes, Beluga, WizardLM and the MRKL agent, I have noticed that the models often get confused about the action and action input.
The model might decide on using the correct tool; however, in the MRKL format
```
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [Search]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
```
the model might write "Search the population of Canada" in the Action field instead of "Search", with "the population of Canada" in the Action Input.
For example, Stable Beluga:
```
Question: How many people live in Canada as of 2023?
Thought: I need a source to find the accurate population count for Canada.
Action: Search for information on Canada's current population.
Action Input: "Canada population"
```
I think that, given that we refer to those actions as tools and that the default prompt instructs the model about them ("You have access to the following tools"), the model gets confused as to what to write in the Action field.
From my testing with the above-mentioned models, when using a custom prompt template and output parser
```
Question: the input question you must answer
Thought: you should always think about what to do
Tool: the tool to use, should be one of [{tool_names}]
Tool Input: the input to the tool
Observation: the result of the tool
... (this Thought/Tool/Tool Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
```
the Tool format gives much better and more consistent results than the default format.
Stable Beluga:
```
Question: How many people live in Canada as of 2023?
Thought: I need to find an accurate estimate of the population of Canada for this year.
Tool: Search
Tool Input: "Population of Canada 2023"
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use the following standard prompt with "stablebeluga-13b.ggmlv3.q4_1.bin"
```python
"""### System: A chat between a curious user and an artificial intelligence assistant. Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Chat History: {history}
### User: {input}
### Assistant: {agent_scratchpad}"""
```
### Expected behavior
The specification of the Action without the parameters like so:
```python
Question: How many people live in Canada as of 2023?
Thought: I need to find an accurate estimate of the population of Canada for this year.
Action: Search
Action Input: "Population of Canada 2023"
``` | MRKL agent: Rename Action and Action Input to Tool and Tool Input | https://api.github.com/repos/langchain-ai/langchain/issues/8717/comments | 2 | 2023-08-03T19:56:07Z | 2023-11-10T16:06:42Z | https://github.com/langchain-ai/langchain/issues/8717 | 1,835,652,884 | 8,717 |
[
"hwchase17",
"langchain"
]
| ### System Info
I have been running this code for weeks, and today it looks like something changed to break it. I'm using the following code snippets...
```python
from langchain.document_loaders import DirectoryLoader
...
loader = DirectoryLoader(directory_path, glob='**/*.pdf')
documents = loader.load()
```
This is the error I am getting...
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
[<ipython-input-9-8159690dd515>](https://localhost:8080/#) in <cell line: 2>()
1 loader = DirectoryLoader(directory_path, glob='**/*.pdf')
----> 2 documents = loader.load()
3 print("Number of documents: ", len(documents))
4
5 timestampit()
5 frames
[/usr/local/lib/python3.10/dist-packages/unstructured/partition/auto.py](https://localhost:8080/#) in partition(filename, content_type, file, file_filename, url, include_page_breaks, strategy, encoding, paragraph_grouper, headers, skip_infer_table_types, ssl_verify, ocr_languages, pdf_infer_table_structure, xml_keep_tags, data_source_metadata, **kwargs)
219 )
220 elif filetype == FileType.PDF:
--> 221 elements = partition_pdf(
222 filename=filename, # type: ignore
223 file=file, # type: ignore
NameError: name 'partition_pdf' is not defined
```
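For what it's worth, a quick way to test the guess (an assumption, not a confirmed diagnosis) that this comes from `unstructured`'s optional PDF dependencies rather than from langchain itself:
```python
# If this raises, unstructured's PDF support is missing or broken in the environment;
# reinstalling unstructured together with its PDF dependencies usually resolves it.
from unstructured.partition.pdf import partition_pdf
print(partition_pdf)
```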
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
loader = DirectoryLoader(directory_path, glob='**/*.pdf')
documents = loader.load()
```
### Expected behavior
Loaded documents. | Getting NameError: name 'partition_pdf' is not defined when running "documents = loader.load()" | https://api.github.com/repos/langchain-ai/langchain/issues/8714/comments | 30 | 2023-08-03T19:40:00Z | 2024-06-29T16:06:17Z | https://github.com/langchain-ai/langchain/issues/8714 | 1,835,633,568 | 8,714 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain-0.0.251 pydantic-1.10.12
python 3.9
AWS Linux 6.1.38-59.109.amzn2023.x86_64
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.prompts.example_selector.ngram_overlap import NGramOverlapExampleSelector
from langchain.prompts import PromptTemplate

examples = [
    dict(text="Call me Ishmael."),
    dict(text="In a hole in the ground there lived a hobbit.")
]
example_prompt = PromptTemplate(input_variables=["text"], template="EXAMPLE: {text}")
selector = NGramOverlapExampleSelector(examples=examples, example_prompt=example_prompt)
t = selector.select_examples(dict(text="hole in ground"))
print(t)
```
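A hedged guess at what the opaque root validator is complaining about: this selector appears to rely on optional packages (numpy and nltk, as far as I can tell), so checking for them directly gives a clearer message than the pydantic error does:
```python
import importlib.util

for pkg in ("numpy", "nltk"):
    if importlib.util.find_spec(pkg) is None:
        print(f"missing optional dependency: {pkg}")
```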
### Expected behavior
Should print the selected examples. Instead I get:
```
Traceback (most recent call last):
File "/work/2023/08/partial/src/ngram_demo.py", line 12, in <module>
selector = NGramOverlapExampleSelector(examples=examples, example_prompt=example_prompt)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for NGramOverlapExampleSelector
__root__
Not all the correct dependencies for this ExampleSelect exist (type=value_error)
``` | Cryptic error message trying to load NGramOverlapSelector | https://api.github.com/repos/langchain-ai/langchain/issues/8711/comments | 3 | 2023-08-03T18:38:20Z | 2023-11-10T16:06:47Z | https://github.com/langchain-ai/langchain/issues/8711 | 1,835,546,057 | 8,711 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I'm having an issue with providing the LLMChain class with multiple variables when I provide it with a memory object. It works fine when I don't have memory attached to it. I followed the example given in this document: [LLM Chain Multiple Inputs](https://python.langchain.com/docs/modules/chains/foundational/llm_chain#additional-ways-of-running-llm-chain)
Here is the code that I used, which is mostly based on the example from the above documentation, besides I've added memory.
```
# Multiple inputs example
template = """Tell me a {adjective} joke about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm = OpenAI(temperature=0)
memory = ConversationKGMemory(llm=llm)
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), memory=memory)
llm_chain.predict(adjective="sad", subject="ducks")
```
With the above code, I get the following error:
```
ValueError: One input key expected got ['adjective', 'subject']
```
Python version: 3.11
LangChain version: 0.0.250
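For context, a sketch of the usual way around this when memory is attached to a multi-input chain: tell the memory which input key it should record (an assumption based on how the memory classes pick a prompt input when there are several).
```python
memory = ConversationKGMemory(llm=llm, input_key="subject")
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), memory=memory)
llm_chain.predict(adjective="sad", subject="ducks")
```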
### Suggestion:
Is there support for using multiple input variables when memory is involved? | Issue: Issue providing LLMChain with multiple variables when memory is used | https://api.github.com/repos/langchain-ai/langchain/issues/8710/comments | 7 | 2023-08-03T18:24:14Z | 2023-12-20T16:07:01Z | https://github.com/langchain-ai/langchain/issues/8710 | 1,835,528,046 | 8,710 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be great to be able to pass configuration info to the `Embeddings` from the `GPT4AllEmbeddings()` constructor in the same manner as the `GPT4All()` constructor. For example, to be able to turn off allow_download, set the model, etc.
`embeddings = GPT4AllEmbeddings(allow_download=False)`
### Motivation
It would allow for the same level of customization as the `GPT4All()` constructor. Right now there is no way to run FAISS in completely offline mode. Trying to do so will hang and error out with the models lookup call to https://gpt4all.io/models/models.json. Being able to pass in configuration information would fix issues such as these. As it is right now, I have to download the gpt4all repo and manually patch the `Embeddings` to pass in the configuration I need.
### Your contribution
Probably not at this time. I would love to, but I'm a bit too busy with work at the moment. | Allow GPT4AllEmbeddings() constructor to be configured | https://api.github.com/repos/langchain-ai/langchain/issues/8708/comments | 7 | 2023-08-03T17:30:19Z | 2024-05-30T02:09:57Z | https://github.com/langchain-ai/langchain/issues/8708 | 1,835,460,289 | 8,708 |
[
"hwchase17",
"langchain"
]
| ### System Info
OS: Linux Mint 21.2 Cinnamon
Python 3.10
LangChain v0.0.251
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I defined a function like:
```python
@tool
def do_my_job() -> list:
"""Just do my job"""
return []
```
Obviously it takes no arguments.
Then I made an agent and ran:
```python
lc_tools = [
tool.do_my_job
]
lc_agent_executor = initialize_agent(agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, llm=lc_llm, tools=lc_tools,
verbose=config.verbose)
lc_reply = lc_agent_executor.run("do my job")
```
I got this error: `ValueError: ChatAgent does not support multi-input tool do_my_job.`, which is raised in `langchain/agents/utils.py`:
```python
def validate_tools_single_input(class_name: str, tools: Sequence[BaseTool]) -> None:
"""Validate tools for single input."""
for tool in tools:
if not tool.is_single_input:
raise ValueError(
f"{class_name} does not support multi-input tool {tool.name}."
)
```
When the function `do_my_job` has arguments = 1, the `tool.is_single_input` was `True`
When the function `do_my_job` has arguments > 1, the `tool.is_single_input` was `False`
When the function `do_my_job` has arguments < 1, the `tool.is_single_input` was `False`
I can specify an unused argument for the function, but it is a bit weird.
Is this check correct?
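A sketch of that unused-argument workaround, purely for illustration:
```python
from langchain.tools import tool

@tool
def do_my_job(query: str = "") -> list:
    """Just do my job; the input is ignored."""
    return []
```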
### Expected behavior
Non-argument functions should work with **Zero-shot ReAct** | ValueError: ChatAgent does not support multi-input tool do_my_job. with a non-argaument function | https://api.github.com/repos/langchain-ai/langchain/issues/8695/comments | 3 | 2023-08-03T14:51:14Z | 2023-11-10T16:06:51Z | https://github.com/langchain-ai/langchain/issues/8695 | 1,835,222,913 | 8,695 |
[
"hwchase17",
"langchain"
]
| ### System Info
ubuntu
[v0.0.250](https://github.com/langchain-ai/langchain/releases/tag/v0.0.250)
### Who can help?
@hwchase17 @agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. implement multi input structured tool
2. enter user input to trigger the tool
3. check parsing
I have had a chance to read and review earlier related regex and other parsing issues, some of which were marked as solved with 0.0.249 and 0.0.250, but I am still seeing parsing problems.
For example, https://github.com/langchain-ai/langchain/pull/7511, which is merged.
When invoking a multi-input structured tool, the LLM returns
```
> Entering new AgentExecutor chain...
Could not parse LLM output: Action:
```json
{
"action": "image_generate",
"action_input": {
"prompt": "Draw a cat"
}
```
with an extra 'json' tag before the (otherwise valid) JSON, which then cannot be parsed.
2. I have tried to handle this in the system prompt (one example below, but I tried multiple):
```
_SUFFIX = ("Don't ask the user to confirm if you are pretty sure what you need to do in order to fulfill the request. "
"For example, don't ask the user 'I can DO X. Would you like me to do that?', just do it. "
"Make sure that any JSON you return is fully valid, does not include any extra 'json' text before the JSON object. "
"For example, If you see you are generating a JSON that looks like this"
"```json"
"{{\"action\": \"edit_image\",\"action_input\": {{\"image_path\": \"sandbox:/images/edited_from_image_a1bc6dbb-1127-49d5-898a-d35c3bb1564d.png\",\"prompt\": \"Make the drawing even more basic\"}}}}"
"``` and has the text 'json' before the valid JSON object, remove the extra 'json' text at the start so that your final output is ```"
"{{\"action\": \"edit_image\",\"action_input\": {{\"image_path\": \"sandbox:/images/edited_from_image_a1bc6dbb-1127-49d5-898a-d35c3bb1564d.png\",\"prompt\": \"Make the drawing even more basic\"}}}}"
"``` without the json"
"Chat history:\n{chat_history}\n\n") + _SUFFIX
```
Any help in trying to either parse better or instruct the LLM to provide a better answer would be appreciated.
Thanks in advance.
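In case it helps, a sketch of a pre-parse cleanup, assuming the only defect is the stray 'json' tag in front of otherwise valid JSON; something like this could run inside a custom output parser before the agent's normal parsing logic:
```python
import re

def strip_json_tag(text: str) -> str:
    # turn "Action:\n```json\n{...}" style output into "Action:\n```\n{...}"
    return re.sub(r"```\s*json", "```", text)
```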
### Expected behavior
Either:
1) it works out of the box with another version of the https://github.com/langchain-ai/langchain/pull/7511 resolution, or
2) it works with additional LLM system prompt SUFFIX instructions if need be.
have tried a bunch of variations can't get it to work, | Continuing LLM response parsing problems after 250(251) | https://api.github.com/repos/langchain-ai/langchain/issues/8691/comments | 9 | 2023-08-03T13:39:39Z | 2023-12-14T16:22:26Z | https://github.com/langchain-ai/langchain/issues/8691 | 1,835,088,585 | 8,691 |
[
"hwchase17",
"langchain"
]
|
**System Info**
python=3.10.8
langchain = 0.0.250
openai=0.27.8
**Information**
The official example notebooks/scripts
The error occurs when I run the official example:
https://python.langchain.com/docs/integrations/text_embedding/azureopenai
I use Azure OpenAI, and the code is shown in the following:
```python
embeddings = OpenAIEmbeddings(
    deployment="text-davinci-002",
    openai_api_key=os.environ["OPENAI_API_KEY"],
    openai_api_type="azure",
    openai_api_base=os.environ["OPENAI_API_BASE"],
    openai_api_version=os.environ["OPENAI_API_VERSION"]
)

text = "This is a test document."
query_result = embeddings.embed_query(text)
doc_result = embeddings.embed_documents([text])
```
and the error is:
```
Traceback (most recent call last):
File "./inter.py", line 54, in <module>
query_result = embeddings.embed_query(text)
File "./miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 500, in embed_query
return self.embed_documents([text])[0]
File "./miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 472, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
File "./miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 358, in _get_len_safe_embeddings
response = embed_with_retry(
File "./miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 107, in embed_with_retry
return _embed_with_retry(**kwargs)
File "./miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "./miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "./miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "./miniconda3/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "./miniconda3/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "./miniconda3/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "./miniconda3/lib/python3.10/site-packages/langchain/embeddings/openai.py", line 104, in _embed_with_retry
response = embeddings.client.create(**kwargs)
File "./miniconda3/lib/python3.10/site-packages/openai/api_resources/embedding.py", line 33, in create
response = super().create(*args, **kwargs)
File "./miniconda3/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
File "./miniconda3/lib/python3.10/site-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
File "./miniconda3/lib/python3.10/site-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "./miniconda3/lib/python3.10/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
```
I have checked existing issues, but this problem cannot be solved with the following solutions:
- solutions mentioned in #4819
- adding chunk_size=1
- only use OpenAIEmbeddings()
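For comparison, an illustrative configuration (placeholder names, not a confirmed fix) in which `deployment` names an Azure deployment of an embeddings model rather than a completion model:
```python
import os
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    deployment="my-ada-002-deployment",   # hypothetical Azure *embeddings* deployment name
    openai_api_type="azure",
    openai_api_base=os.environ["OPENAI_API_BASE"],
    openai_api_version=os.environ["OPENAI_API_VERSION"],
    openai_api_key=os.environ["OPENAI_API_KEY"],
    chunk_size=1,                         # often recommended for Azure embedding endpoints
)
```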
### Suggestion:
_No response_ | text_embedding with OpenAIEmbeddings cannot work | https://api.github.com/repos/langchain-ai/langchain/issues/8687/comments | 4 | 2023-08-03T12:34:16Z | 2023-11-17T16:06:24Z | https://github.com/langchain-ai/langchain/issues/8687 | 1,834,976,029 | 8,687 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.250
langchainplus-sdk 0.0.20
Python 3.8.10
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain import Cohere

llm = Cohere(temperature=0.9, cohere_api_key=cohere_api_key)
```
### Expected behavior
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 llm = Cohere(temperature=0.9, cohere_api_key=cohere_api_key)
File /opt/miniconda3/envs/ai/lib/python3.8/site-packages/langchain/load/serializable.py:74, in Serializable.__init__(self, **kwargs)
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
File /opt/miniconda3/envs/ai/lib/python3.8/site-packages/pydantic/main.py:339, in pydantic.main.BaseModel.__init__()
File /opt/miniconda3/envs/ai/lib/python3.8/site-packages/pydantic/main.py:1102, in pydantic.main.validate_model()
File /opt/miniconda3/envs/ai/lib/python3.8/site-packages/langchain/llms/cohere.py:127, in Cohere.validate_environment(cls, values)
124 import cohere
126 values["client"] = cohere.Client(cohere_api_key)
--> 127 values["async_client"] = cohere.AsyncClient(cohere_api_key)
128 except ImportError:
129 raise ImportError(
130 "Could not import cohere python package. "
131 "Please install it with `pip install cohere`."
132 )
AttributeError: module 'cohere' has no attribute 'AsyncClient'
```
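A quick check, assuming `AsyncClient` only exists in newer releases of the cohere SDK; if this prints an old version, upgrading the `cohere` package is the likely fix:
```python
from importlib.metadata import version
import cohere

print(version("cohere"), hasattr(cohere, "AsyncClient"))
```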
| AttributeError: module 'cohere' has no attribute 'AsyncClient' | https://api.github.com/repos/langchain-ai/langchain/issues/8686/comments | 2 | 2023-08-03T12:30:40Z | 2023-11-09T16:12:29Z | https://github.com/langchain-ai/langchain/issues/8686 | 1,834,970,426 | 8,686 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am currently working with the Langchain platform and I've encountered an issue during the integration of ConstitutionalChain with the existing retrievalQaChain.
In my implementation, I've used retrievalQaChain with a custom prompt and Vecatra as the retriever. My system is hosted on Vercel and I'm primarily aiming to acquire a streaming output.
My objective was to utilize ConstitutionalChain alongside retrievalQaChain to improve the system's overall performance. However, upon integrating ConstitutionalChain, I am continually facing an error that disrupts the seamless functioning of my system.
Regrettably, I've not been able to identify the root cause or find a solution for this persistent issue. It would be highly beneficial if I could receive assistance to overcome this obstacle. To provide better context and aid the debugging process, I'll follow up with the specifics of the error message, the steps that lead to the error, and any relevant screenshots if possible.
**Error: error [TypeError: Cannot read properties of undefined (reading 'format')]**
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```js
const model = new ChatOpenAI({
temperature: 0,
azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION,
azureOpenAIApiKey: process.env.AZURE_OPENAI_API_KEY,
azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
azureOpenAIApiDeploymentName:
process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
streaming: true,
callbackManager: CallbackManager.fromHandlers(handlers),
});
const metadataFilter = `doc.sourceId=${SourceId}`;
const retrivalChain = new RetrievalQAChain({
combineDocumentsChain: loadQAStuffChain(model, { prompt }),
retriever: vectorStore.asRetriever(10, { filter: metadataFilter }),
returnSourceDocuments: true,
});
const principle = new ConstitutionalPrinciple({
name: 'Ethical Principle',
critiqueRequest:
'The model should only talk about ethical and legal things.',
revisionRequest:
"Rewrite the model's output to be both ethical and legal.",
});
const chain = ConstitutionalChain.fromLLM(model, {
chain: retrivalChain,
constitutionalPrinciples: [principle],
});
chain
.call({
query,
})
.then((result) => {
// TODO: consume the references
// eslint-disable-next-line no-console
console.log('result.sourceDocuments', result.sourceDocuments); // This will log the source documents
})
.catch((e) => {
// eslint-disable-next-line no-console
console.error('error', e.message);
})
.finally(() => {
handlers.handleChainEnd();
});
const corsHeaders = {
'Access-Control-Allow-Origin': '*',
};
return new StreamingTextResponse(stream, {
headers: corsHeaders,
});
```
### Expected behavior
It should return the streamed output with the constitutional principle | Issue with Langchain: Error When Implementing ConstitutionalChain with RetrievalQaChain and Vecatra | https://api.github.com/repos/langchain-ai/langchain/issues/8681/comments | 2 | 2023-08-03T10:53:12Z | 2023-11-09T16:07:31Z | https://github.com/langchain-ai/langchain/issues/8681 | 1,834,819,131 | 8,681 |
[
"hwchase17",
"langchain"
]
| ### System Info
There seems to be a circular import problem introduced with `langsmith` dependency.
I believe so, because this problem is constantly appearing in the `langchain` versions that do include `langsmith` dependency, and for example `0.0.230` seems to be ok (I don't see that one including `langsmith` as dep), but latter versions are exhibiting the same problem. And there is also `langchain` / `langsmith` back-fourth in stacktrace.
The error:
```text
Traceback (most recent call last):
File "/Users/Dzmitry_Kankalovich/Workspace/personal/langsmith.py", line 2, in <module>
from langchain.chat_models import ChatOpenAI
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/agents/__init__.py", line 31, in <module>
from langchain.agents.agent import (
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 15, in <module>
from langchain.agents.agent_iterator import AgentExecutorIterator
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/agents/agent_iterator.py", line 21, in <module>
from langchain.callbacks.manager import (
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/__init__.py", line 21, in <module>
from langchain.callbacks.manager import (
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/manager.py", line 41, in <module>
from langchain.callbacks.tracers.langchain import LangChainTracer
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/tracers/__init__.py", line 3, in <module>
from langchain.callbacks.tracers.langchain import LangChainTracer
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/tracers/langchain.py", line 11, in <module>
from langsmith import Client
File "/Users/Dzmitry_Kankalovich/Workspace/personal/langsmith.py", line 2, in <module>
from langchain.chat_models import ChatOpenAI
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/chat_models/__init__.py", line 20, in <module>
from langchain.chat_models.anthropic import ChatAnthropic
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/chat_models/anthropic.py", line 3, in <module>
from langchain.callbacks.manager import (
ImportError: cannot import name 'AsyncCallbackManagerForLLMRun' from partially initialized module 'langchain.callbacks.manager' (most likely due to a circular import) (/Users/Dzmitr
y_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/manager.py)
```
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
0. Use python `3.11`
1. Take sample *LangSmith* code from their tutorial:
```py
import os
from langchain.chat_models import ChatOpenAI
os.environ['LANGCHAIN_TRACING_V2'] = 'true'
os.environ['LANGCHAIN_ENDPOINT'] = "https://api.smith.langchain.com"
llm = ChatOpenAI()
llm.predict("Hello, world!")
```
2. Provision fresh `venv` with the following dependencies:
```text
langchain==0.0.250
openai==0.27.8
```
3. Observe stacktrace:
```text
from langchain.callbacks.tracers.langchain import LangChainTracer
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/tracers/__init__.py", line 3, in <module>
from langchain.callbacks.tracers.langchain import LangChainTracer
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/tracers/langchain.py", line 11, in <module>
from langsmith import Client
File "/Users/Dzmitry_Kankalovich/Workspace/personal/langsmith.py", line 2, in <module>
from langchain.chat_models import ChatOpenAI
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/chat_models/__init__.py", line 20, in <module>
from langchain.chat_models.anthropic import ChatAnthropic
File "/Users/Dzmitry_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/chat_models/anthropic.py", line 3, in <module>
from langchain.callbacks.manager import (
ImportError: cannot import name 'AsyncCallbackManagerForLLMRun' from partially initialized module 'langchain.callbacks.manager' (most likely due to a circular import) (/Users/Dzmitr
y_Kankalovich/Workspace/personal/.venv/lib/python3.11/site-packages/langchain/callbacks/manager.py)
```
### Expected behavior
No errors, llm reply received | LangChain 0.0.250 circular import bug, possibly LangSmith related | https://api.github.com/repos/langchain-ai/langchain/issues/8680/comments | 2 | 2023-08-03T10:21:15Z | 2023-08-03T19:40:14Z | https://github.com/langchain-ai/langchain/issues/8680 | 1,834,771,152 | 8,680 |
[
"hwchase17",
"langchain"
]
| ### System Info
Related [Stack Overflow](https://stackoverflow.com/questions/76414862/how-do-you-catch-the-duplicate-id-error-when-using-langchain-vectorstores-chroma) post. Is there any way to avoid duplicated IDs and only insert new ones?
@agola11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
Chroma.from_documents(docs, embeddings, ids=ids, persist_directory='db')
```
### Expected behavior
Non-duplicate IDs get added | Handle duplicate IDs error when using langchain.vectorstores.Chroma.from_documents | https://api.github.com/repos/langchain-ai/langchain/issues/8679/comments | 1 | 2023-08-03T09:07:10Z | 2023-11-14T16:07:04Z | https://github.com/langchain-ai/langchain/issues/8679 | 1,834,615,597 | 8,679 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.250
I am experiencing an error when importing Html2TextTransformer:
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[79], line 1
----> 1 from langchain.utils.math import cosine_similarity
ModuleNotFoundError: No module named 'langchain.utils.math'; 'langchain.utils' is not a package
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_transformers import Html2TextTransformer
### Expected behavior
Importing successfully. | Unable to import Html2TextTransformer from langchain.document_transformers | https://api.github.com/repos/langchain-ai/langchain/issues/8678/comments | 2 | 2023-08-03T08:47:29Z | 2023-11-09T16:10:19Z | https://github.com/langchain-ai/langchain/issues/8678 | 1,834,583,165 | 8,678 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://python.langchain.com/docs/integrations/retrievers/merger_retriever
https://python.langchain.com/docs/modules/data_connection/retrievers/ensemble
seems like the same.
Both task a list of exist retriever and MERGE the output together.
### Idea or request for content:
Should describe the difference between the both. | DOC: What is the difference between LOTR(Merger Retriever) and Ensemble Retriever | https://api.github.com/repos/langchain-ai/langchain/issues/8677/comments | 1 | 2023-08-03T08:44:25Z | 2023-08-07T10:08:04Z | https://github.com/langchain-ai/langchain/issues/8677 | 1,834,575,870 | 8,677 |
[
"hwchase17",
"langchain"
]
Combine agents & vector stores - https://python.langchain.com/docs/modules/agents/how_to/agent_vectorstore
Vectorstore agent - https://python.langchain.com/docs/integrations/toolkits/vectorstore
What is the difference between these two? | what is the difference between the combine agents &vector stores and vectorstore agent? | https://api.github.com/repos/langchain-ai/langchain/issues/8676/comments | 3 | 2023-08-03T08:40:56Z | 2023-11-15T16:07:22Z | https://github.com/langchain-ai/langchain/issues/8676 | 1,834,568,374 | 8,676 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi guys, does anyone know where the problem is here? I'm getting the error "(Failed to calculate number of tokens, falling back to approximate count)" after making a request to OpenAI using LangChain and an SQL database connection. On the request I can see that there is a property with max_tokens: 256. Is there a problem with the API key or pricing plan, or anything related to the database size?
The version of the langchain library that I'm using is "langchain": "0.0.110".
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm trying to chat with my database and I'm getting the error above.
### Expected behavior
It is showing me an error (Failed to calculate number of tokens, falling back to approximate count) and it is taking so much time to load. | Failed to calculate number of tokens, falling back to approximate count | https://api.github.com/repos/langchain-ai/langchain/issues/8675/comments | 5 | 2023-08-03T08:17:16Z | 2023-11-12T16:06:29Z | https://github.com/langchain-ai/langchain/issues/8675 | 1,834,529,804 | 8,675 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am following the LangChain lessons and found some errors while running the Jupyter code at
`https://learn.deeplearning.ai/langchain/lesson/4/chains`.
Precisely, MULTI_PROMPT_ROUTER_TEMPLATE does not work well on the first run.
1) It raises the following error:
```
OutputParserException: Parsing text
{
"destination": "physics",
"next_inputs": "What is black body radiation?"
}
raised following error:
Got invalid return object. Expected markdown code snippet with JSON object, bu
```
Then I added an additional example I/O (after <<OUTPUT>>) to it:
```
eg:
<< INPUT >>
"What is black body radiation?"
<< OUTPUT >>
\```json
{{{{
"destination": string \ name of the prompt to use or "DEFAULT"
"next_inputs": string \ a potentially modified version of the original input
}}}}
\```
```
It works, but it still errors when routing to the default path when I ask it "Why does every cell in our body contain DNA?"
2) It tried to route to biology, but failed to get a response.
```
ValueError: Received invalid destination chain name 'biology'
```
How can I get a stable running result? Or is this issue one of the flaws of current LLMs, which cannot ensure that all outputs follow our specified style?
Thanks a lot.
### Suggestion:
_No response_ | RunError of langchain's tutorial in deeplearning.ai, L3-Chains | https://api.github.com/repos/langchain-ai/langchain/issues/8674/comments | 5 | 2023-08-03T07:58:27Z | 2024-07-23T20:25:09Z | https://github.com/langchain-ai/langchain/issues/8674 | 1,834,500,944 | 8,674 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
What does LangChain's logo mean?
### Suggestion:
_No response_ | What does langchain's logo mean | https://api.github.com/repos/langchain-ai/langchain/issues/8673/comments | 3 | 2023-08-03T07:17:23Z | 2023-12-02T07:37:01Z | https://github.com/langchain-ai/langchain/issues/8673 | 1,834,440,509 | 8,673 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi, I'm trying to interact with a HuggingFace model (NumbersStation/nsql-llama-2-7B) using Manifest, but when I try to send it a prompt I'm getting the following error from the model:
`The following 'model_kwargs' are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)`
I tried to find out how to control the model_kwargs in the ManifestWrapper but didn't find a way to do so.
That's how I configured the wrapper:
```
local_llm = ManifestWrapper(
client=manifest,
llm_kwargs={"temperature": 0.0, "max_tokens": 1024},
verbose=True
)
```
and that's how I ran the model using Manifest (on my local machine):
```
python3 -m manifest.api.app \
--model_type huggingface \
--model_generation_type text-generation \
--model_name_or_path nsql-llama-2-7B \
--device 0
```
Would like some help here :)
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Run the model locally using Manifest
2. connect to it using ManifestWrapper.
3. send a simple prompt with LLMChain.
### Expected behavior
Getting response from the model. | Getting model args exception while using ManifestWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/8672/comments | 2 | 2023-08-03T07:07:08Z | 2023-11-09T16:12:42Z | https://github.com/langchain-ai/langchain/issues/8672 | 1,834,425,326 | 8,672 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I just want to keep separate chat histories for different users in ConversationBufferMemory, so that each user can only get their own chat history.
This is my code:
```python
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1000)
docsearch = Chroma.from_documents(split_docs, embeddings)
memory = ConversationBufferMemory(memory_key="chat_history")
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="map_rerank",
                                 retriever=docsearch.as_retriever(), memory=memory,
                                 return_source_documents=False)
```
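One approach I'm considering (an untested sketch, the helper name is my own) is to keep one `ConversationBufferMemory` per user id and build the chain with that user's memory:
```python
# Untested sketch: one ConversationBufferMemory per user id.
user_memories = {}  # maps user_id -> ConversationBufferMemory

def get_qa_for_user(user_id):
    if user_id not in user_memories:
        user_memories[user_id] = ConversationBufferMemory(memory_key="chat_history")
    # Build the chain with this user's memory, so answers only see their own history
    return RetrievalQA.from_chain_type(
        llm=OpenAI(),
        chain_type="map_rerank",
        retriever=docsearch.as_retriever(),
        memory=user_memories[user_id],
        return_source_documents=False,
    )
```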
Could anyone give me some advice, please?
### Suggestion:
_No response_ | how to set chat_history for different user | https://api.github.com/repos/langchain-ai/langchain/issues/8671/comments | 4 | 2023-08-03T07:06:27Z | 2023-11-30T08:19:42Z | https://github.com/langchain-ai/langchain/issues/8671 | 1,834,424,379 | 8,671 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am trying to send emails using the `ZapierNLAWrapper`. Although it reports that the emails have been sent successfully, I cannot find the sent emails in Gmail.
```
from langchain.chains import TransformChain
from langchain.tools.zapier.tool import ZapierNLARunAction
from langchain.utilities.zapier import ZapierNLAWrapper
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits import ZapierToolkit
from langchain.agents import AgentType
zapier = ZapierNLAWrapper(zapier_nla_api_key="")
actions = zapier.list()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
agent = initialize_agent(toolkit.get_tools(), llm, agent="zero-shot-react-description", verbose=True)
agent.run(
f"The emails of the shortlisted candidates are the following: [email protected] and [email protected]. Their names are Arghya and Nilesh respectively. Send emails to them, In the body of the email, congratulate them on their selection and inform them about the next steps in the hiring process."
)
```
Here is the Verbose log:
```
> Entering new AgentExecutor chain...
I need to send emails to the shortlisted candidates. I should use the Gmail: Send Email tool.
Action: Gmail: Send Email
Action Input:
- Subject: "Congratulations on Your Selection"
- Body: "Dear [Candidate's Name],\n\nCongratulations on being shortlisted for the position! We are pleased to inform you that you have been selected to proceed to the next steps in the hiring process. We will be contacting you shortly with more details.\n\nBest regards,\n[Your Name]"
- To: "[email protected]"
- Cc: "[email protected]"
Observation: null
Thought:The email has been sent to the first candidate. I need to send the email to the second candidate now.
Action: Gmail: Send Email
Action Input:
- Subject: "Congratulations on Your Selection"
- Body: "Dear [Candidate's Name],\n\nCongratulations on being shortlisted for the position! We are pleased to inform you that you have been selected to proceed to the next steps in the hiring process. We will be contacting you shortly with more details.\n\nBest regards,\n[Your Name]"
- To: "[email protected]"
- Cc: "[email protected]"
Observation: null
Thought:Both emails have been sent successfully.
Final Answer: The emails have been sent to the shortlisted candidates.
> Finished chain.
```
### Suggestion:
_No response_ | Issue: < Emails are not being sent though it shows emails sent successfully > | https://api.github.com/repos/langchain-ai/langchain/issues/8667/comments | 1 | 2023-08-03T06:18:56Z | 2023-08-03T09:44:12Z | https://github.com/langchain-ai/langchain/issues/8667 | 1,834,353,463 | 8,667 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I'm writing to propose the integration of Redis Cache support for ChatModels within Langchain. Chat functionality is an essential and perhaps the most common scenario in many applications. Enhancing Langchain with Redis Cache support for ChatModels can bring about the following benefits.
### Motivation
1. Performance Improvement:
   - Utilizing Redis Cache can significantly reduce the latency, providing users with a faster and more responsive chat experience.
   - Caching chat messages and related data can lower the database load and enhance overall system performance.
2. Scalability:
   - Redis Cache can enable Langchain to efficiently manage large-scale real-time chat systems.
   - The support for various data structures in Redis enables sophisticated caching strategies, accommodating different chat scenarios and requirements.
3. Reliability:
   - Redis's replication and persistence features can offer a reliable caching layer that ensures data consistency and availability.
   - It supports various eviction policies that allow for a flexible and efficient utilization of memory.
4. Integration Ease:
   - Redis is widely used, well-documented, and has client libraries in most programming languages, making integration with Langchain straightforward.
   - The community around Redis is vibrant and active, which can be beneficial for ongoing support and enhancement.
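For concreteness, the usage I have in mind would simply mirror how the existing LLM cache is configured (illustrative sketch only):
```python
# Illustrative sketch of the desired behavior: the Redis-backed cache
# should also be honored by chat models such as ChatOpenAI.
import langchain
from redis import Redis
from langchain.cache import RedisCache
from langchain.chat_models import ChatOpenAI

langchain.llm_cache = RedisCache(redis_=Redis(host="localhost", port=6379))

chat = ChatOpenAI(temperature=0)
chat.predict("Tell me a joke")  # first call hits the API
chat.predict("Tell me a joke")  # second identical call should be served from Redis
```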
### Your contribution
I am considering... | Redis Cache to support ChatModel | https://api.github.com/repos/langchain-ai/langchain/issues/8666/comments | 3 | 2023-08-03T05:50:39Z | 2024-04-26T15:14:01Z | https://github.com/langchain-ai/langchain/issues/8666 | 1,834,321,052 | 8,666 |
[
"hwchase17",
"langchain"
]
| ### System Info
* python3.9
* langchain 0.0242
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
m3e: HuggingFaceEmbeddings = SentenceTransformerEmbeddings(model_name='moka-ai/m3e-base')
vectorStore: VectorStore = Clickhouse(embedding=m3e, config=settings)
retriever = vectorStore.as_retriever(search_type='similarity_score_threshold', search_kwargs={'score_threshold': 20.0})
documents = retriever.get_relevant_documents('音乐')
```
The retriever invokes `_get_relevant_documents` in `vectorstores/base.py`, and `docs_and_similarities` contains both the doc and the score.
I found `score_threshold` is not used in `similarity_search_with_relevance_scores`.
Take pgvector, for example: it implements `similarity_search_with_score`, so the retriever call
chain is `similarity_search_with_relevance_scores (base.py)` -> `_similarity_search_with_relevance_scores (base.py)` -> `similarity_search_with_score (pgvector)`, and the score filter is applied in `similarity_search_with_relevance_scores (base.py)`.
But the Clickhouse vectorstore returns from `similarity_search_with_relevance_scores` without any score filtering, because Clickhouse overrides `similarity_search_with_relevance_scores` from `base.py`, so the base-class filtering never runs.
---
I have two suggestions for this issue.
1. use `score_threshold` in `similarity_search_with_relevance_scores`
```sql
SELECT document, metadata, dist
FROM default.langchain
WHERE dist > {score_threshold}
ORDER BY L2Distance(embedding, [....]) AS dist ASC
LIMIT 4
```
2. rename the function `similarity_search_with_relevance_scores` to `_similarity_search_with_relevance_scores`,
so that the call chain becomes `similarity_search_with_relevance_scores (base.py)` -> `_similarity_search_with_relevance_scores (clickhouse)`
and the `score_threshold` filtering in `base.py` is applied to the returned data.
### Expected behavior
Score filtering should work normally in the Clickhouse vectorstore.
BTW, I can submit pr if necessary | ClickHouse VectorStore score_threshold not working. | https://api.github.com/repos/langchain-ai/langchain/issues/8664/comments | 2 | 2023-08-03T04:44:00Z | 2023-11-03T02:35:28Z | https://github.com/langchain-ai/langchain/issues/8664 | 1,834,261,445 | 8,664 |
[
"hwchase17",
"langchain"
]
| ### System Info
python 3.10
langchain 0.0.249
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
loader = DirectoryLoader("content/zh/knowledge/", show_progress=True, loader_cls=TextLoader)
raw_documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=0,
    separators=['\n\n', '\n', '。', '?', '!', '|']
)
# embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
embeddings = HuggingFaceEmbeddings(model_name="moka-ai/m3e-base")
count = int(len(raw_documents) / 100) + 1
for index in range(count):
    print('当前:%s' % index)
    documents = text_splitter.split_documents(raw_documents[index*100:min((index+1)*100, len(raw_documents))-1])
    vectorstore = FAISS.from_documents(documents=documents, embedding=embeddings, kwargs={"distance_strategy":"MAX_INNER_PRODUCT"})
# Save vectorstore
with open("vectorstore.pkl", "wb") as f:
    pickle.dump(vectorstore, f)
```
### Expected behavior
Using FAISS with kwargs={"distance_strategy":"MAX_INNER_PRODUCT"} raises an error; it does not work.
<img width="1512" alt="image" src="https://github.com/langchain-ai/langchain/assets/4583537/fee0f9b9-49bc-4713-adb9-de8c9be76606">
| Use FAISS {"distance_strategy":"MAX_INNER_PRODUCT"} Not work! | https://api.github.com/repos/langchain-ai/langchain/issues/8662/comments | 2 | 2023-08-03T03:57:32Z | 2023-08-03T06:53:06Z | https://github.com/langchain-ai/langchain/issues/8662 | 1,834,227,144 | 8,662 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
llama_print_timings: load time = 127.16 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 1753.93 ms / 237 tokens ( 7.40 ms per token, 135.13 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 1779.79 ms
GGML_ASSERT: /tmp/pip-install-7c_dsjad/llama-cpp-python_04ef08a182034167b5fee4a62dd2cdc2/vendor/llama.cpp/ggml-cuda.cu:3572: src0->type == GGML_TYPE_F16
中止 (核心已傾印)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [x] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
vectorstore = Chroma.from_documents(documents=all_splits, embedding=LlamaCppEmbeddings(model_path="/home/mefae1/llm/chinese-alpaca-2-7b/ggml-model-q4_0.bin",
n_gpu_layers=40,
n_batch=16,
n_ctx=2048
))
```
### Expected behavior
return vectorstore and not error | LlamaCppEmbeddings Error `type == GGML_TYPE_F16` | https://api.github.com/repos/langchain-ai/langchain/issues/8660/comments | 3 | 2023-08-03T03:15:23Z | 2023-11-09T16:13:07Z | https://github.com/langchain-ai/langchain/issues/8660 | 1,834,199,574 | 8,660 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
```
# need to use GPT-4 here as GPT-3.5 does not understand, however hard you insist, that
# it should use the calculator to perform the final calculation
llm = ChatOpenAI(temperature=0, model="gpt-4")
```
This doc says that it needs GPT-4. Are there other models that can be used? For example, some open-source large language models?
### Idea or request for content:
_No response_ | DOC: <Running Agent as an Iterator>' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/8656/comments | 1 | 2023-08-03T01:19:51Z | 2023-11-09T16:11:04Z | https://github.com/langchain-ai/langchain/issues/8656 | 1,834,120,844 | 8,656 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The documentation links seem to be broken; the Agents use-case links are no longer working and give "Page Not Found".
Example Links - https://python.langchain.com/docs/use_cases/autonomous_agents/baby_agi_with_agent.html
### Idea or request for content:
_No response_ | DOC: Agent Usecase Links are broken | https://api.github.com/repos/langchain-ai/langchain/issues/8649/comments | 1 | 2023-08-02T22:11:00Z | 2023-11-08T16:06:39Z | https://github.com/langchain-ai/langchain/issues/8649 | 1,833,968,332 | 8,649 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The following is a link to the LangChain documentation. I need to integrate OPENAI_FUNCTIONS into this custom LLM chat agent:
https://python.langchain.com/docs/modules/agents/how_to/custom_llm_chat_agent
### Suggestion:
_No response_ | Integrate OPENAI_FUNCTIONS in Agent | https://api.github.com/repos/langchain-ai/langchain/issues/8648/comments | 1 | 2023-08-02T22:02:54Z | 2023-11-08T16:06:44Z | https://github.com/langchain-ai/langchain/issues/8648 | 1,833,961,567 | 8,648 |
[
"hwchase17",
"langchain"
]
| ### Feature request
An implementation of a `FakeEmbeddingModel` that generates identical vectors given identical input texts.
### Motivation
Currently langchain has a `FakeEmbedding` model that generates a vector of random numbers that is unrelated to the content being embedded.
This model is pretty useful in, e.g., unit tests, because it doesn't need to load any actual models or connect to the internet.
However, there is one realistic feature it misses, which makes it "fake" compared to a realistic embedding model: given identical input texts, the generated embedding vectors should be identical as well.
This would make unit tests that involve a `FakeEmbeddingModel` more realistic. E.g., you can test whether the similarity search makes sense: searching by the exact same text that is in your vector store should give an exact match, because the embeddings are identical, which gives a distance of 0.
### Your contribution
I can submit a PR for it.
We can use the hash of the text as a seed when generating random numbers.
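A rough sketch of what I have in mind (class and method names are placeholders, not a final API):
```python
# Sketch only: seed the RNG with a hash of the text so identical texts
# always produce identical vectors.
import hashlib
from typing import List

import numpy as np
from langchain.embeddings.base import Embeddings


class DeterministicFakeEmbeddings(Embeddings):
    def __init__(self, size: int = 10):
        self.size = size

    def _embed(self, text: str) -> List[float]:
        # Identical texts hash to the same seed, hence identical vectors
        seed = int(hashlib.sha256(text.encode("utf-8")).hexdigest(), 16) % (2**32)
        rng = np.random.default_rng(seed)
        return rng.normal(size=self.size).tolist()

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return [self._embed(t) for t in texts]

    def embed_query(self, text: str) -> List[float]:
        return self._embed(text)
```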
In this case, if input texts are identical, we should get exactly the same embedding vectors. | Deterministic fake embedding model | https://api.github.com/repos/langchain-ai/langchain/issues/8644/comments | 2 | 2023-08-02T18:36:33Z | 2023-08-04T17:51:01Z | https://github.com/langchain-ai/langchain/issues/8644 | 1,833,717,208 | 8,644 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The current GraphQL Tool is very static (https://python.langchain.com/docs/modules/agents/tools/integrations/graphql): one has to set up the GraphQL URL, headers, etc. at the time the tool is configured.
This tool should follow [Request*Tool](https://github.com/hwchase17/langchain/blob/2b3da522eb866cb79dfdc9116a7c466b866cb3d0/langchain/tools/requests/tool.py#L53), where the per-request information is sent as input to the tool. This way, the GraphQL tool becomes generic and can be used to query different endpoints, or with different headers.
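For illustration, the kind of runtime input we have in mind looks roughly like this (the exact schema is up for discussion and may differ from the linked PR):
```python
# Hypothetical tool input: endpoint, headers and query supplied per invocation.
tool_input = {
    "graphql_endpoint": "https://api.example.com/graphql",
    "headers": {"Authorization": "Bearer <user-specific-token>"},
    "query": "query { viewer { login } }",
}
```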
This issue was originally (and wrongly) created in the [langchainjs repo](https://github.com/hwchase17/langchainjs/issues/1713) instead of the python langchain repo (this one).
### Motivation
We have a use case where the URL and headers change between invocations of this tool, and they can be very dynamic; every user might have a different "Authorization" header. So we want to register the tool once with the agent, but override these values at the time of invocation.
### Your contribution
Yes, my team is willing to contribute.
Here is the pull request https://github.com/langchain-ai/langchain/pull/8616 | GraphQL Execution Tool should support URL/headers as input at runtime | https://api.github.com/repos/langchain-ai/langchain/issues/8638/comments | 2 | 2023-08-02T17:02:44Z | 2023-11-08T16:06:50Z | https://github.com/langchain-ai/langchain/issues/8638 | 1,833,584,670 | 8,638 |
[
"hwchase17",
"langchain"
]
| ### Feature request
When connecting to the ChatGPT API, it is possible to get chatbot responses such as "As an AI model...", which ruins the chatbot experience in your application.
A way to filter these responses and generate a default response instead might be better.
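For illustration, the kind of filtering meant here could be as simple as this (not an existing LangChain API, just a sketch):
```python
# Sketch: replace "As an AI model..." style responses with a default answer.
DISCLAIMER_MARKERS = ("as an ai model", "as an ai language model")

def filter_response(text: str, fallback: str = "Sorry, I can't help with that.") -> str:
    if any(marker in text.lower() for marker in DISCLAIMER_MARKERS):
        return fallback
    return text
```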
### Motivation
For applications using the OpenAI API, this would enable filtering such responses out.
### Your contribution
Not really | Catch and Filter OpenAI "As an AI model" responses | https://api.github.com/repos/langchain-ai/langchain/issues/8637/comments | 6 | 2023-08-02T16:50:53Z | 2024-02-12T16:16:29Z | https://github.com/langchain-ai/langchain/issues/8637 | 1,833,568,563 | 8,637 |
[
"hwchase17",
"langchain"
]
| ### Feature request
It would be great if there was an explicit way to support parallel upserts in the `Pinecone` vectorstore wrapper. This could either be done by altering the existing `add_texts` method or adding a new "async" equivalent.
### Motivation
The native Pinecone client supports [parallel upsert](https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel) operations which can improve throughput. This is technically supported with the current `langchain` vectorstore wrapper using the following combination of arguments `index.add_texts(texts, async_req=True, batch_size=None)`.
However, because `add_texts` [returns](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L102-L105) the list of `ids` instead of the response to `self._index.upsert`, we cannot call `get` to suspend the program until all of the responses are available (see code snippet from the Pinecone docs below).
```
# Upsert data with 100 vectors per upsert request asynchronously
# - Create pinecone.Index with pool_threads=30 (limits to 30 simultaneous requests)
# - Pass async_req=True to index.upsert()
with pinecone.Index('example-index', pool_threads=30) as index:
# Send requests in parallel
async_results = [
index.upsert(vectors=ids_vectors_chunk, async_req=True)
for ids_vectors_chunk in chunks(example_data_generator, batch_size=100)
]
# Wait for and retrieve responses (this raises in case of error)
[async_result.get() for async_result in async_results]
```
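For reference, here is roughly how I would expect to use the wrapper if `add_texts` returned the raw upsert responses (hypothetical; this is not the current behavior):
```python
# Hypothetical usage: add_texts would return the async upsert results
# instead of the list of ids, so callers can wait on them.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone

index = Pinecone.from_existing_index("example-index", OpenAIEmbeddings())
texts = ["first document", "second document"]

async_results = index.add_texts(texts, async_req=True, batch_size=None)
# Block until every parallel upsert has completed (raises in case of error)
[result.get() for result in async_results]
```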
### Your contribution
I can try to put a PR up for this. Although I wonder if there was a deliberate decision to not return the raw response for [`upsert`](https://github.com/pinecone-io/pinecone-python-client/blob/main/pinecone/index.py#L73-L78) in order to have a cleaner abstraction. | Support parallel upserts in Pinecone | https://api.github.com/repos/langchain-ai/langchain/issues/8635/comments | 1 | 2023-08-02T15:54:23Z | 2023-09-05T14:59:11Z | https://github.com/langchain-ai/langchain/issues/8635 | 1,833,455,388 | 8,635 |
[
"hwchase17",
"langchain"
]
| ### System Info
OS: Windows
Name: langchain
Version: 0.0.249
Python 3.11.2
### Who can help?
@hw
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When loading the PubMed tool, it has to be called with "pupmed". This is possibly a typo, since "pupmed" isn't mentioned in the documentation.
### Expected behavior
Expect to call the pubmed tool with "pubmed" and not "pupmed" or documentation should be more clear. | Pubmed needs to be called with "pupo | https://api.github.com/repos/langchain-ai/langchain/issues/8631/comments | 2 | 2023-08-02T14:22:29Z | 2023-11-08T16:06:55Z | https://github.com/langchain-ai/langchain/issues/8631 | 1,833,291,208 | 8,631 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi!
How can I make a chatbot that uses its own data and also has access to the internet to get more info (like new updates)? I've tried and searched everywhere but can't make it work.
Here is the code:
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.document_loaders import (
UnstructuredWordDocumentLoader,
TextLoader,
UnstructuredPowerPointLoader,
)
from langchain.tools import Tool
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.chat_models import ChatOpenAI
import os
import openai
import sys
from dotenv import load_dotenv, find_dotenv
sys.path.append('../..')
_ = load_dotenv(find_dotenv()) # read local .env file
google_api_key = os.environ.get("GOOGLE_API_KEY")
google_cse_id = os.environ.get("GOOGLE_CSE_ID")
openai.api_key = os.environ['OPENAI_API_KEY']
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus"
os.environ["LANGCHAIN_API_KEY"] = os.environ['LANGCHAIN_API_KEY']
os.environ["GOOGLE_API_KEY"] = google_api_key
os.environ["GOOGLE_CSE_ID"] = google_cse_id
folder_path_docx = "DB\\DB VARIADO\\DOCS"
folder_path_txt = " DB\\BLOG-POSTS"
folder_path_pptx_1 = "DB\\PPT JUNIO"
folder_path_pptx_2 = "DB\\DB VARIADO\\PPTX"
loaded_content = []
for file in os.listdir(folder_path_docx):
if file.endswith(".docx"):
file_path = os.path.join(folder_path_docx, file)
loader = UnstructuredWordDocumentLoader(file_path)
docx = loader.load()
loaded_content.extend(docx)
for file in os.listdir(folder_path_txt):
if file.endswith(".txt"):
file_path = os.path.join(folder_path_txt, file)
loader = TextLoader(file_path, encoding='utf-8')
text = loader.load()
loaded_content.extend(text)
for file in os.listdir(folder_path_pptx_1):
if file.endswith(".pptx"):
file_path = os.path.join(folder_path_pptx_1, file)
loader = UnstructuredPowerPointLoader(file_path)
slides_1 = loader.load()
loaded_content.extend(slides_1)
for file in os.listdir(folder_path_pptx_2):
if file.endswith(".pptx"):
file_path = os.path.join(folder_path_pptx_2, file)
loader = UnstructuredPowerPointLoader(file_path)
slides_2 = loader.load()
loaded_content.extend(slides_2)
embedding = OpenAIEmbeddings()
embeddings_content = []
for one_loaded_content in loaded_content:
embedding_content = embedding.embed_query(one_loaded_content.page_content)
embeddings_content.append(embedding_content)
db = DocArrayInMemorySearch.from_documents(loaded_content, embedding)
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 3})
search = GoogleSearchAPIWrapper()
def custom_search(query):
max_results = 3
internet_results = search.run(query)[:max_results]
return internet_results
chain = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(model_name="gpt-4", temperature=0),
chain_type="map_reduce",
retriever=retriever,
return_source_documents=True,
return_generated_question=True,
)
history = []
while True:
query = input("Hola, soy Chatbot. ¿Qué te gustaría saber? ")
internet_results = custom_search(query)
combined_results = loaded_content + [internet_results]
response = chain(
{"question": query, "chat_history": history, "documents": combined_results})
print(response["answer"])
history.append(("system", query))
history.append(("assistant", response["answer"]))
```
This is the error message I get: "The document does not provide information on...". So it seems it doesn't access the internet, or something else is wrong (?)
Really appreciate your suggestion or your help!
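One direction I'm considering (an untested sketch; the tool names and descriptions are my own) is to expose the document store and the Google search as two separate tools for an agent, instead of concatenating the search results into the chain input:
```python
# Untested sketch: let an agent decide between the internal documents and web search.
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import RetrievalQA

llm = ChatOpenAI(model_name="gpt-4", temperature=0)
doc_qa = RetrievalQA.from_chain_type(llm=llm, retriever=retriever)

tools = [
    Tool(
        name="internal_documents",
        func=doc_qa.run,
        description="Answers questions using the loaded docx/txt/pptx documents.",
    ),
    Tool(
        name="google_search",
        func=search.run,
        description="Searches the internet for recent or missing information.",
    ),
]

agent = initialize_agent(
    tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
answer = agent.run(query)  # query comes from the input() loop above
```
The chat history handling would still need to be wired in, but at least the model could decide when to search the web.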
### Suggestion:
_No response_ | How to connect a Chatbot that has its own data but has also access to internet for search? | https://api.github.com/repos/langchain-ai/langchain/issues/8625/comments | 9 | 2023-08-02T11:32:47Z | 2024-07-25T13:33:09Z | https://github.com/langchain-ai/langchain/issues/8625 | 1,833,002,315 | 8,625 |
[
"hwchase17",
"langchain"
]
| ### Feature request
When I use RetrievalQA, I need to add to and reorder the content retrieved by the retriever.
```
qa = RetrievalQA.from_chain_type(llm=chat,
chain_type="stuff",
retriever=docsearch.as_retriever(search_kwargs={"k": 2}),
chain_type_kwargs={"prompt": PROMPT,"memory":memory })
```
I want the content retrieved by the retriever to be reordered by "source".
The following is the code I currently use to manipulate the retrieved content to achieve this.
```
retriever = docsearch.as_retriever(search_kwargs={"k": 6})
search_result = retriever.get_relevant_documents(quey)
def sort_paragraphs(paragraphs):
sorted_paragraphs = sorted(paragraphs, key=lambda x: x.metadata["source"])
return sorted_paragraphs
paragraphs = sort_paragraphs(search_result)
sorted_paragraphs = ""
for i in paragraphs:
sorted_paragraphs = sorted_paragraphs + i.page_content + "\n"
```
How can I define a retriever that I can use in my chain to organize the retrieved content as I wish?
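What I roughly have in mind is a small wrapper retriever (untested sketch; the exact `BaseRetriever` hook to override may differ between versions):
```python
# Sketch: wrap an existing retriever and re-sort its results by metadata["source"].
from typing import List

from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.schema import BaseRetriever, Document


class SortedBySourceRetriever(BaseRetriever):
    base_retriever: BaseRetriever

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        docs = self.base_retriever.get_relevant_documents(query)
        return sorted(docs, key=lambda d: d.metadata["source"])


sorted_retriever = SortedBySourceRetriever(
    base_retriever=docsearch.as_retriever(search_kwargs={"k": 6})
)
qa = RetrievalQA.from_chain_type(
    llm=chat,
    chain_type="stuff",
    retriever=sorted_retriever,
    chain_type_kwargs={"prompt": PROMPT, "memory": memory},
)
```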
### Motivation
Some custom retrieval logic is required to seek optimal performance.
### Your contribution
I would like to have tutorials and documentation related to custom retrievers.
@hwchase17
https://twitter.com/hwchase17/status/1646272240202432512
I think langchain needs a way to define custom retrievers.
| How to create a custom retriever | https://api.github.com/repos/langchain-ai/langchain/issues/8623/comments | 12 | 2023-08-02T10:30:14Z | 2024-04-02T09:29:33Z | https://github.com/langchain-ai/langchain/issues/8623 | 1,832,908,335 | 8,623 |
[
"hwchase17",
"langchain"
]
| ### System Info
The class `MarkdownHeaderTextSplitter` is not a `TextSplitter`, and it must implement all the corresponding methods.
```
class MarkdownHeaderTextSplitter:
...
```
@hwchase17 @eyurtsev
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
    ("###", "Header 3"),
]
splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=headers_to_split_on
)
splitter.split_documents(docs)
```
### Expected behavior
Accept the call | MarkdownHeaderTextSplitter is not a TextSplitter | https://api.github.com/repos/langchain-ai/langchain/issues/8620/comments | 6 | 2023-08-02T08:06:38Z | 2023-11-10T07:39:18Z | https://github.com/langchain-ai/langchain/issues/8620 | 1,832,660,433 | 8,620 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Recently Microsoft announced their [first iteration of running Llama using the ONNX format](https://github.com/microsoft/Llama-2-Onnx/tree/main). Hence it would be awesome if LangChain came up with early support for ONNX Runtime models.
### Motivation
There are two reasons for this.
1. ONNX has long been a standard for running inference on CPU / GPU (ONNX GPU), so the idea of supporting LLMs in the ONNX Runtime format should move forward fast.
2. Current ways of running this involve a lot of overhead; LangChain can easily provide the abstraction.
### Your contribution
I can start by experimenting with whether we can implement this using the existing LLM interface. However, the bottleneck is the .onnx-format weights, which have to be requested from Microsoft. I filled out the application and am waiting for approval. Let me know if we can work on this issue. | Langchain Support for Onnx Llama | https://api.github.com/repos/langchain-ai/langchain/issues/8619/comments | 5 | 2023-08-02T08:03:06Z | 2024-03-13T19:56:57Z | https://github.com/langchain-ai/langchain/issues/8619 | 1,832,655,316 | 8,619 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.249
Python 3.11.2
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I followed the examples of openai_functions_agent at
https://python.langchain.com/docs/modules/agents/
https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent
https://python.langchain.com/docs/modules/agents/how_to/custom-functions-with-openai-functions-agent
```
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage
from langchain.agents import OpenAIFunctionsAgent
from langchain.agents import tool
from langchain.agents import AgentExecutor
llm = ChatOpenAI(temperature=0)
@tool
def get_word_length(word: str) -> int:
"""Returns the length of a word."""
import pdb # never go in here
pdb.set_trace()
return len(word)
tools = [get_word_length]
system_message = SystemMessage(content="You are very powerful assistant, but bad at calculating lengths of words")
prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.run("how many letters in the word educa?")
```
But the openai_functions_agent does not invoke the tool as expected.
```
> Entering new AgentExecutor chain...
[{'role': 'system', 'content': 'You are very powerful assistant, but bad at calculating lengths of words'}, {'role': 'user', 'content': 'how many letters in the word educa?'}]
There are 5 letters in the word "educa".
> Finished chain.
```
I tried using other tools, but they were not used either. It seems that OPENAI_FUNCTIONS has bugs.
### Expected behavior
AgentType.OPENAI_FUNCTIONS works as docs show | AgentType.OPENAI_FUNCTIONS did not work | https://api.github.com/repos/langchain-ai/langchain/issues/8618/comments | 2 | 2023-08-02T07:02:13Z | 2023-11-08T16:07:05Z | https://github.com/langchain-ai/langchain/issues/8618 | 1,832,564,675 | 8,618 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain: 0.0.246
python: 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I want to get a summary of the chat history between the AI bot and the user.
- ChatOpenAI model gpt-3.5-turbo
- LLMChain
But I got an error: "openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?"
My Code
```python
memory = get_memory_from_chat_prompt(messages)
prompt = PromptTemplate(
    input_variables=SUMMARIZE_INPUT_VARIABLES,
    template=SUMMARIZE_CHAT_TEMPLATE
)
llm = ChatOpenAI(temperature=.1, model_name='gpt-3.5-turbo', max_tokens=1000)
human_input = """Please provide a summary. You can include the key points or main findings in a concise manner."""
answer_chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
result = answer_chain.predict(human_input=human_input)
```
Error:
```
File "/Users/congle/working/bamboo/bot/app/package/aianswer.py", line 78, in get_summarize_from_messages
result = answer_chain.predict(human_input=human_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/llm.py", line 252, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/base.py", line 260, in __call__
final_outputs: Dict[str, Any] = self.prep_outputs(
^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/base.py", line 354, in prep_outputs
self.memory.save_context(inputs, outputs)
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/memory/summary_buffer.py", line 60, in save_context
self.prune()
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/memory/summary_buffer.py", line 71, in prune
self.moving_summary_buffer = self.predict_new_summary(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/memory/summary.py", line 37, in predict_new_summary
return chain.predict(summary=existing_summary, new_lines=new_lines)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/llm.py", line 252, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/base.py", line 258, in __call__
raise e
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/llm.py", line 92, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/llm.py", line 102, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/base.py", line 451, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/base.py", line 582, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/base.py", line 488, in _generate_helper
raise e
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/base.py", line 475, in _generate_helper
self._generate(
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/openai.py", line 400, in _generate
response = completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/openai.py", line 116, in completion_with_retry
return _completion_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "/opt/homebrew/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/[email protected]/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/openai.py", line 114, in _completion_with_retry
return llm.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/openai/api_resources/completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
```
### Expected behavior
I expect to get summary from chat history by model gpt-3.5. Thanks for your help! | Invalid model when use chain | https://api.github.com/repos/langchain-ai/langchain/issues/8613/comments | 0 | 2023-08-02T05:17:12Z | 2023-08-02T07:13:12Z | https://github.com/langchain-ai/langchain/issues/8613 | 1,832,449,850 | 8,613 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I am following the instruction from this doc:
DOC: Structure answers with OpenAI functions
-https://python.langchain.com/docs/use_cases/question_answering/integrations/openai_functions_retrieval_qa
But this gives some error.
# Code
```python
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.chat_models import AzureChatOpenAI
from langchain.chat_models import ChatOpenAI
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.prompts import PromptTemplate
from langchain.chains import create_qa_with_sources_chain
path_txt = r"C:\Users\a126291\OneDrive - AmerisourceBergen(ABC)\data\langchain\state_of_the_union.txt"
def get_config_dict():
import os
import yaml
with open(os.path.expanduser('~/.config/config.yaml')) as fh:
config = yaml.safe_load(fh)
# openai
keys = ["OPENAI_API_KEY","OPENAI_API_TYPE","OPENAI_API_BASE","OPENAI_API_VERSION"]
for key in keys:
os.environ[key] = config.get(key)
return config
config = get_config_dict()
#========= qa chain
loader = TextLoader(path_txt, encoding="utf-8")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
for i, text in enumerate(texts):
text.metadata["source"] = f"{i}-pl"
embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1)
docsearch = Chroma.from_documents(texts, embeddings)
# vectorstore = Chroma.from_documents(texts, embeddings)
# retriever = vectorstore.as_retriever()
llm = AzureChatOpenAI(**config['kw_azure_llm'],temperature=0.4)
#------- query
qa_chain = create_qa_with_sources_chain(llm)
doc_prompt = PromptTemplate(
template="Content: {page_content}\nSource: {source}",
input_variables=["page_content", "source"],
)
final_qa_chain = StuffDocumentsChain(
llm_chain=qa_chain,
document_variable_name="context",
document_prompt=doc_prompt,
)
retrieval_qa = RetrievalQA(
retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain
)
query = "What did the president say about russia"
retrieval_qa.run(query)
```
# Error: InvalidRequestError: Unrecognized request arguments supplied: function_call, functions
```bash
---------------------------------------------------------------------------
InvalidRequestError Traceback (most recent call last)
Cell In[36], line 69
64 retrieval_qa = RetrievalQA(
65 retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain
66 )
68 query = "What did the president say about russia"
---> 69 retrieval_qa.run(query)
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:290, in Chain.run(self, callbacks, tags, *args, **kwargs)
288 if len(args) != 1:
289 raise ValueError("`run` supports only one positional argument.")
--> 290 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
292 if kwargs and not args:
293 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File ~\venv\py311openai\Lib\site-packages\langchain\chains\retrieval_qa\base.py:120, in BaseRetrievalQA._call(self, inputs, run_manager)
117 question = inputs[self.input_key]
119 docs = self._get_docs(question)
--> 120 answer = self.combine_documents_chain.run(
121 input_documents=docs, question=question, callbacks=_run_manager.get_child()
122 )
124 if self.return_source_documents:
125 return {self.output_key: answer, "source_documents": docs}
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:293, in Chain.run(self, callbacks, tags, *args, **kwargs)
290 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
292 if kwargs and not args:
--> 293 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
295 if not kwargs and not args:
296 raise ValueError(
297 "`run` supported with either positional arguments or keyword arguments,"
298 " but none were provided."
299 )
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File ~\venv\py311openai\Lib\site-packages\langchain\chains\combine_documents\base.py:84, in BaseCombineDocumentsChain._call(self, inputs, run_manager)
82 # Other keys are assumed to be needed for LLM prediction
83 other_keys = {k: v for k, v in inputs.items() if k != self.input_key}
---> 84 output, extra_return_dict = self.combine_docs(
85 docs, callbacks=_run_manager.get_child(), **other_keys
86 )
87 extra_return_dict[self.output_key] = output
88 return extra_return_dict
File ~\venv\py311openai\Lib\site-packages\langchain\chains\combine_documents\stuff.py:87, in StuffDocumentsChain.combine_docs(self, docs, callbacks, **kwargs)
85 inputs = self._get_inputs(docs, **kwargs)
86 # Call predict on the LLM.
---> 87 return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File ~\venv\py311openai\Lib\site-packages\langchain\chains\llm.py:252, in LLMChain.predict(self, callbacks, **kwargs)
237 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
238 """Format prompt with kwargs and pass to LLM.
239
240 Args:
(...)
250 completion = llm.predict(adjective="funny")
251 """
--> 252 return self(kwargs, callbacks=callbacks)[self.output_key]
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File ~\venv\py311openai\Lib\site-packages\langchain\chains\llm.py:92, in LLMChain._call(self, inputs, run_manager)
87 def _call(
88 self,
89 inputs: Dict[str, Any],
90 run_manager: Optional[CallbackManagerForChainRun] = None,
91 ) -> Dict[str, str]:
---> 92 response = self.generate([inputs], run_manager=run_manager)
93 return self.create_outputs(response)[0]
File ~\venv\py311openai\Lib\site-packages\langchain\chains\llm.py:102, in LLMChain.generate(self, input_list, run_manager)
100 """Generate LLM result from inputs."""
101 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
--> 102 return self.llm.generate_prompt(
103 prompts,
104 stop,
105 callbacks=run_manager.get_child() if run_manager else None,
106 **self.llm_kwargs,
107 )
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\base.py:167, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
159 def generate_prompt(
160 self,
161 prompts: List[PromptValue],
(...)
164 **kwargs: Any,
165 ) -> LLMResult:
166 prompt_messages = [p.to_messages() for p in prompts]
--> 167 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\base.py:102, in BaseChatModel.generate(self, messages, stop, callbacks, tags, **kwargs)
100 except (KeyboardInterrupt, Exception) as e:
101 run_manager.on_llm_error(e)
--> 102 raise e
103 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
104 generations = [res.generations for res in results]
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\base.py:94, in BaseChatModel.generate(self, messages, stop, callbacks, tags, **kwargs)
90 new_arg_supported = inspect.signature(self._generate).parameters.get(
91 "run_manager"
92 )
93 try:
---> 94 results = [
95 self._generate(m, stop=stop, run_manager=run_manager, **kwargs)
96 if new_arg_supported
97 else self._generate(m, stop=stop)
98 for m in messages
99 ]
100 except (KeyboardInterrupt, Exception) as e:
101 run_manager.on_llm_error(e)
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\base.py:95, in <listcomp>(.0)
90 new_arg_supported = inspect.signature(self._generate).parameters.get(
91 "run_manager"
92 )
93 try:
94 results = [
---> 95 self._generate(m, stop=stop, run_manager=run_manager, **kwargs)
96 if new_arg_supported
97 else self._generate(m, stop=stop)
98 for m in messages
99 ]
100 except (KeyboardInterrupt, Exception) as e:
101 run_manager.on_llm_error(e)
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\openai.py:359, in ChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
351 message = _convert_dict_to_message(
352 {
353 "content": inner_completion,
(...)
356 }
357 )
358 return ChatResult(generations=[ChatGeneration(message=message)])
--> 359 response = self.completion_with_retry(messages=message_dicts, **params)
360 return self._create_chat_result(response)
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\openai.py:307, in ChatOpenAI.completion_with_retry(self, **kwargs)
303 @retry_decorator
304 def _completion_with_retry(**kwargs: Any) -> Any:
305 return self.client.create(**kwargs)
--> 307 return _completion_with_retry(**kwargs)
File ~\venv\py311openai\Lib\site-packages\tenacity\__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File ~\venv\py311openai\Lib\site-packages\tenacity\__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File ~\venv\py311openai\Lib\site-packages\tenacity\__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File ~\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py:449, in Future.result(self, timeout)
447 raise CancelledError()
448 elif self._state == FINISHED:
--> 449 return self.__get_result()
451 self._condition.wait(timeout)
453 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File ~\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File ~\venv\py311openai\Lib\site-packages\tenacity\__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\openai.py:305, in ChatOpenAI.completion_with_retry.<locals>._completion_with_retry(**kwargs)
303 @retry_decorator
304 def _completion_with_retry(**kwargs: Any) -> Any:
--> 305 return self.client.create(**kwargs)
File ~\venv\py311openai\Lib\site-packages\openai\api_resources\chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
23 while True:
24 try:
---> 25 return super().create(*args, **kwargs)
26 except TryAgain as e:
27 if timeout is not None and time.time() > start + timeout:
File ~\venv\py311openai\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py:153, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
127 @classmethod
128 def create(
129 cls,
(...)
136 **params,
137 ):
138 (
139 deployment_id,
140 engine,
(...)
150 api_key, api_base, api_type, api_version, organization, **params
151 )
--> 153 response, _, api_key = requestor.request(
154 "post",
155 url,
156 params=params,
157 headers=headers,
158 stream=stream,
159 request_id=request_id,
160 request_timeout=request_timeout,
161 )
163 if stream:
164 # must be an iterator
165 assert not isinstance(response, OpenAIResponse)
File ~\venv\py311openai\Lib\site-packages\openai\api_requestor.py:298, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
277 def request(
278 self,
279 method,
(...)
286 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
287 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
288 result = self.request_raw(
289 method.lower(),
290 url,
(...)
296 request_timeout=request_timeout,
297 )
--> 298 resp, got_stream = self._interpret_response(result, stream)
299 return resp, got_stream, self.api_key
File ~\venv\py311openai\Lib\site-packages\openai\api_requestor.py:700, in APIRequestor._interpret_response(self, result, stream)
692 return (
693 self._interpret_response_line(
694 line, result.status_code, result.headers, stream=True
695 )
696 for line in parse_stream(result.iter_lines())
697 ), True
698 else:
699 return (
--> 700 self._interpret_response_line(
701 result.content.decode("utf-8"),
702 result.status_code,
703 result.headers,
704 stream=False,
705 ),
706 False,
707 )
File ~\venv\py311openai\Lib\site-packages\openai\api_requestor.py:763, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
761 stream_error = stream and "error" in resp.data
762 if stream_error or not 200 <= rcode < 300:
--> 763 raise self.handle_error_response(
764 rbody, rcode, resp.data, rheaders, stream_error=stream_error
765 )
766 return resp
InvalidRequestError: Unrecognized request arguments supplied: function_call, functions
```
### Idea or request for content:
_No response_ | InvalidRequestError: Unrecognized request arguments supplied: function_call, functions | https://api.github.com/repos/langchain-ai/langchain/issues/8593/comments | 20 | 2023-08-01T19:06:48Z | 2024-04-30T16:31:10Z | https://github.com/langchain-ai/langchain/issues/8593 | 1,831,864,866 | 8,593 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello there,
I am currently attempting to add combined memory to ConversationalRetrievalChain (CRC). However, I am unsure if combined memory is compatible with CRC. I have made a few modifications, but unfortunately, I have encountered errors preventing it from running properly. I was hoping that someone could assist me in resolving this issue. Thank you.
```
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferWindowMemory, CombinedMemory, ConversationSummaryMemory
chat_history = []
conv_memory = ConversationBufferWindowMemory(
memory_key="chat_history_lines",
input_key="input",
output_key='answer',
k=1
)
summary_memory = ConversationSummaryMemory(llm=turbo_llm, input_key="input", output_key='answer')
# Combined
memory = CombinedMemory(memories=[conv_memory, summary_memory])
_DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
{context}
Summary of conversation:
{history}
Current conversation:
{chat_history_lines}
Human: {input}
AI:"""
PROMPT_= PromptTemplate(
input_variables=["history", "input", "context", "chat_history_lines"],
template=_DEFAULT_TEMPLATE
)
qa = ConversationalRetrievalChain.from_llm(
llm=turbo_llm,
chain_type="stuff",
memory=memory,
retriever=retriever,
return_source_documents=True,
return_generated_question=True,
combine_docs_chain_kwargs={'prompt': PROMPT_})
chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
```
Error:
`ValueError: Missing some input keys: {'input'}`
When I replace "input" with "question" in `_DEFAULT_TEMPLATE` and `PROMPT_`, the error becomes:
`ValueError: One output key expected, got dict_keys(['answer', 'source_documents', 'generated_question'])`
### Suggestion:
_No response_ | Missing some input keys: {'input'} when using Combined memory and ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/8590/comments | 4 | 2023-08-01T18:08:07Z | 2023-11-08T16:07:09Z | https://github.com/langchain-ai/langchain/issues/8590 | 1,831,784,534 | 8,590 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain = "^0.0.248"
### Who can help?
@hwchase17 @agola11
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
import hashlib
import langchain
from gptcache import Cache # type: ignore
from gptcache.manager.factory import manager_factory # type: ignore
from gptcache.processor.pre import get_prompt # type: ignore
from langchain.cache import GPTCache
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
os.environ["OPENAI_API_KEY"] = "sk-..."
llm = ChatOpenAI(temperature=0) # type: ignore
def get_hashed_name(name):
return hashlib.sha256(name.encode()).hexdigest()
def init_gptcache(cache_obj: Cache, llm: str):
hashed_llm = get_hashed_name(llm)
cache_obj.init(
pre_embedding_func=get_prompt,
data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
)
langchain.llm_cache = GPTCache(init_gptcache)
# The first time, it is not yet in cache, so it should take longer
response = llm("Tell me a joke")
print(f"Response 1: {response}")
# The second time, it is already in cache, so it should be faster
response = llm("Tell me a joke")
print(f"Response 2: {response}")
```
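To make the difference between a cache miss and a cache hit observable, timing the two calls should be enough. This is an untested sketch that continues the script above; I use `predict()` here only because it accepts a plain string:
```python
import time

# Continues the reproduction script above; `llm` is the cached ChatOpenAI.
for attempt in (1, 2):
    start = time.perf_counter()
    llm.predict("Tell me a joke")
    print(f"Attempt {attempt} took {time.perf_counter() - start:.2f}s")
```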
### Expected behavior
Response 1:
Why did the chicken cross the road?
To get to the other side.
Response 2:
Why did the chicken cross the road?
To get to the other side. | GPTCache implementation isnt working with ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/8584/comments | 3 | 2023-08-01T14:51:37Z | 2023-12-04T16:06:03Z | https://github.com/langchain-ai/langchain/issues/8584 | 1,831,458,479 | 8,584 |
[
"hwchase17",
"langchain"
]
| ### System Info
Verified in the Docker image `python:3.9`
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
how to reproduce?
1. docker run -it python:3.9 bash
2. pip install langchain
3. pip install openai
4. run script
```
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import SystemMessage
chat_model = ChatOpenAI(n=3)
prompt = "What is the capital of France?"
message = SystemMessage(content=prompt)
responses = chat_model.predict_messages([message], n=3)
print(responses)
```
output:
content='The capital of France is Paris.' additional_kwargs={} example=False
### Expected behavior
Hello,
When running the script I expect to get 3 responses, because I set the parameter `n=3` during initialization of ChatOpenAI and in the call of predict_messages.
Still, the response is a single answer!
Please let me know how to correctly use the parameter `n`, or fix the current behavior.
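If `predict_messages` is meant to return only a single message by design, a pointer to the intended API would also help. My assumption (untested sketch, continuing the script above) is that the lower-level `generate()` call should expose all `n` candidates:
```python
# Continuing the reproduction script above:
# chat_model = ChatOpenAI(n=3), message = SystemMessage(content=prompt)
result = chat_model.generate([[message]])
for generation in result.generations[0]:
    print(generation.message.content)
```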
best regards,
LW | Cannot create "n" responses | https://api.github.com/repos/langchain-ai/langchain/issues/8581/comments | 4 | 2023-08-01T13:55:11Z | 2023-08-02T16:36:44Z | https://github.com/langchain-ai/langchain/issues/8581 | 1,831,347,798 | 8,581 |
[
"hwchase17",
"langchain"
]
| ### Feature request
`ConversationSummaryMemory` and `ConversationSummaryBufferMemory` can be used as memory within async conversational chains, but they themselves are fully blocking. In particular, `Chain.acall()` is async, but it calls `Chain.prep_outputs()`, which calls `self.memory.save_context()`. In the case of `ConversationSummaryBufferMemory`, `save_context()` calls `prune()`, which calls `SummarizerMixin.predict_new_summary()`, which creates an `LLMChain` and then calls `LLMChain.predict()`. We really need to be able to specify an async handler when we create the memory object, pass that through to the `LLMChain`, and ultimately `await LLMChain.apredict()`.
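To illustrate, here is a rough sketch of what we have in mind. The `asave_context`, `aprune`, and `apredict_new_summary` names are hypothetical, and the pruning logic below only mirrors our reading of the synchronous implementation — the point is that the summarization step ends in `await LLMChain.apredict()`:
```python
from typing import Any, Dict, List

from langchain.chains import LLMChain
from langchain.memory import ConversationSummaryBufferMemory
from langchain.schema import BaseMessage, get_buffer_string


class AsyncSummaryBufferMemory(ConversationSummaryBufferMemory):
    """Hypothetical async variant; these method names are not part of the library."""

    async def apredict_new_summary(
        self, messages: List[BaseMessage], existing_summary: str
    ) -> str:
        new_lines = get_buffer_string(
            messages, human_prefix=self.human_prefix, ai_prefix=self.ai_prefix
        )
        chain = LLMChain(llm=self.llm, prompt=self.prompt)
        # The key change: await the async prediction instead of blocking.
        return await chain.apredict(summary=existing_summary, new_lines=new_lines)

    async def asave_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
        input_str, output_str = self._get_input_output(inputs, outputs)
        self.chat_memory.add_user_message(input_str)
        self.chat_memory.add_ai_message(output_str)
        await self.aprune()

    async def aprune(self) -> None:
        # Mirrors the synchronous prune(), but summarizes asynchronously.
        buffer = self.chat_memory.messages
        curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
        if curr_buffer_length > self.max_token_limit:
            pruned_memory: List[BaseMessage] = []
            while curr_buffer_length > self.max_token_limit:
                pruned_memory.append(buffer.pop(0))
                curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
            self.moving_summary_buffer = await self.apredict_new_summary(
                pruned_memory, self.moving_summary_buffer
            )
```
The missing piece is then for `Chain.acall()` (or whatever prepares the outputs) to await a method like `asave_context()` instead of calling the blocking `save_context()`.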
### Motivation
The blocking nature of the `LLMChain.predict()` calls means that otherwise-async chains end up blocking regularly. If you use an ASGI server like `uvicorn`, this means that lots of requests can end up waiting. You can wrap `uvicorn` in a WSGI server like `gunicorn` so that there are multiple processes available, but blocking one whole process every time the memory object needs to summarize can still block any websocket users connected to the blocked process. It's just not good.
### Your contribution
We're going to have to solve this problem for ourselves, and we'd be happy to try to contribute a solution back to the community. However, we'd be first-time contributors and we're not sure whether there are others already making similar improvements. | Need support for async memory, especially for ConversationSummaryMemory and ConversationSummaryBufferMemory | https://api.github.com/repos/langchain-ai/langchain/issues/8580/comments | 7 | 2023-08-01T13:32:50Z | 2023-11-07T22:50:05Z | https://github.com/langchain-ai/langchain/issues/8580 | 1,831,301,181 | 8,580 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi everyone,
I'm having trouble working with the agent structure in Langchain.
I would like to have an agent that has memory and can return intermediate steps.
However, it doesn't work: when I build an agent executor, `return_intermediate_steps=True` makes it crash because the memory cannot read the output, and when I use the agent directly, `return_intermediate_steps=True` does nothing.
Do you know if there is a way to have an agent that has memory and can return intermediate steps at the same time?
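For reference, here is a minimal sketch of the setup I am describing; the tool list and model are placeholders for my real ones:
```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

tools = []  # placeholder; my real setup has several custom tools

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

agent_executor = initialize_agent(
    tools=tools,
    llm=ChatOpenAI(temperature=0),
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    return_intermediate_steps=True,
)

# Crashes at run time because the memory cannot tell which output to save.
result = agent_executor({"input": "Hello"})
```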
Thanks for your help
### Suggestion:
_No response_ | Issue: Agent with memory and intermediate steps | https://api.github.com/repos/langchain-ai/langchain/issues/8579/comments | 7 | 2023-08-01T13:22:18Z | 2024-02-13T05:59:52Z | https://github.com/langchain-ai/langchain/issues/8579 | 1,831,277,961 | 8,579 |
[
"hwchase17",
"langchain"
]
| ### System Info
### langchain Version **0.0.249**
### Python Version **3.8**
### Other notes
Using this in a notebook in Azure Synapse Studio.
### Error
When the import runs, I get the error message below:
```
TypeError Traceback (most recent call last)
/tmp/ipykernel_17180/722847199.py in <module>
----> 1 from langchain.vectorstores import Pinecone
2 from langchain.embeddings.openai import OpenAIEmbeddings
3
4 import datetime
5 from datetime import date, timedelta
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/__init__.py in <module>
4 from typing import Optional
5
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.cache import BaseCache
8 from langchain.chains import (
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/agents/__init__.py in <module>
1 """Interface for agents."""
----> 2 from langchain.agents.agent import (
3 Agent,
4 AgentExecutor,
5 AgentOutputParser,
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/agents/agent.py in <module>
13 from pydantic import BaseModel, root_validator
14
---> 15 from langchain.agents.agent_iterator import AgentExecutorIterator
16 from langchain.agents.agent_types import AgentType
17 from langchain.agents.tools import InvalidTool
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/agents/agent_iterator.py in <module>
19 )
20
---> 21 from langchain.callbacks.manager import (
22 AsyncCallbackManager,
23 AsyncCallbackManagerForChainRun,
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/callbacks/__init__.py in <module>
1 """Callback handlers that allow listening to events in LangChain."""
2
----> 3 from langchain.callbacks.aim_callback import AimCallbackHandler
4 from langchain.callbacks.argilla_callback import ArgillaCallbackHandler
5 from langchain.callbacks.arize_callback import ArizeCallbackHandler
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/callbacks/aim_callback.py in <module>
3
4 from langchain.callbacks.base import BaseCallbackHandler
----> 5 from langchain.schema import AgentAction, AgentFinish, LLMResult
6
7
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/schema/__init__.py in <module>
1 from langchain.schema.agent import AgentAction, AgentFinish
2 from langchain.schema.document import BaseDocumentTransformer, Document
----> 3 from langchain.schema.memory import BaseChatMessageHistory, BaseMemory
4 from langchain.schema.messages import (
5 AIMessage,
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/schema/memory.py in <module>
5
6 from langchain.load.serializable import Serializable
----> 7 from langchain.schema.messages import AIMessage, BaseMessage, HumanMessage
8
9
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/schema/messages.py in <module>
136
137
--> 138 class HumanMessageChunk(HumanMessage, BaseMessageChunk):
139 pass
140
~/cluster-env/clonedenv/lib/python3.8/site-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.ModelMetaclass.__new__()
~/cluster-env/clonedenv/lib/python3.8/abc.py in __new__(mcls, name, bases, namespace, **kwargs)
83 """
84 def __new__(mcls, name, bases, namespace, **kwargs):
---> 85 cls = super().__new__(mcls, name, bases, namespace, **kwargs)
86 _abc_init(cls)
87 return cls
TypeError: multiple bases have instance lay-out conflict
```
### Other installed packages in the system
Package Version
----------------------------- -------------------
absl-py 0.13.0
adal 1.2.7
adlfs 0.7.7
aiohttp 3.8.5
aiosignal 1.3.1
annotated-types 0.5.0
appdirs 1.4.4
applicationinsights 0.11.10
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
astor 0.8.1
astunparse 1.6.3
async-timeout 4.0.2
attrs 21.2.0
azure-common 1.1.27
azure-core 1.16.0
azure-datalake-store 0.0.51
azure-graphrbac 0.61.1
azure-identity 1.4.1
azure-mgmt-authorization 0.61.0
azure-mgmt-containerregistry 8.0.0
azure-mgmt-core 1.3.0
azure-mgmt-keyvault 2.2.0
azure-mgmt-resource 13.0.0
azure-mgmt-storage 11.2.0
azure-storage-blob 12.8.1
azure-synapse-ml-predict 1.0.0
azureml-core 1.34.0
azureml-dataprep 2.22.2
azureml-dataprep-native 38.0.0
azureml-dataprep-rslex 1.20.2
azureml-dataset-runtime 1.34.0
azureml-mlflow 1.34.0
azureml-opendatasets 1.34.0
azureml-synapse 0.0.1
azureml-telemetry 1.34.0
backcall 0.2.0
backports.functools-lru-cache 1.6.4
backports.tempfile 1.0
backports.weakref 1.0.post1
beautifulsoup4 4.9.3
bleach 5.0.1
blinker 1.4
bokeh 2.3.2
Brotli 1.0.9
brotlipy 0.7.0
cachetools 4.2.2
certifi 2021.5.30
cffi 1.14.5
chardet 4.0.0
charset-normalizer 3.2.0
click 8.0.1
cloudpickle 1.6.0
conda-package-handling 1.7.3
configparser 5.0.2
contextlib2 0.6.0.post1
cryptography 3.4.7
cycler 0.10.0
Cython 0.29.23
cytoolz 0.11.0
dash 1.20.0
dash-core-components 1.16.0
dash-cytoscape 0.2.0
dash-html-components 1.1.3
dash-renderer 1.9.1
dash-table 4.11.3
dask 2021.6.2
databricks-cli 0.12.1
dataclasses-json 0.5.14
debugpy 1.3.0
decorator 4.4.2
defusedxml 0.7.1
dill 0.3.4
distlib 0.3.6
distro 1.7.0
dnspython 2.4.1
docker 4.4.4
dotnetcore2 2.1.23
entrypoints 0.3
et-xmlfile 1.1.0
fastjsonschema 2.16.1
filelock 3.8.0
fire 0.4.0
Flask 2.0.1
Flask-Compress 0.0.0
flatbuffers 1.12
frozenlist 1.4.0
fsspec 2021.10.0
fsspec-wrapper 0.1.6
fusepy 3.0.1
future 0.18.2
gast 0.3.3
gensim 3.8.3
geographiclib 1.52
geopy 2.1.0
gevent 21.1.2
gitdb 4.0.7
GitPython 3.1.18
google-auth 1.32.1
google-auth-oauthlib 0.4.1
google-pasta 0.2.0
greenlet 1.1.0
grpcio 1.37.1
h5py 2.10.0
html5lib 1.1
hummingbird-ml 0.4.0
idna 2.10
imagecodecs 2021.3.31
imageio 2.9.0
importlib-metadata 4.6.1
importlib-resources 5.9.0
ipykernel 6.0.1
ipython 7.23.1
ipython-genutils 0.2.0
ipywidgets 7.6.3
isodate 0.6.0
itsdangerous 2.0.1
jdcal 1.4.1
jedi 0.18.0
jeepney 0.6.0
Jinja2 3.0.1
jmespath 0.10.0
joblib 1.0.1
jsonpickle 2.0.0
jsonschema 4.15.0
jupyter-client 6.1.12
jupyter-core 4.7.1
jupyterlab-pygments 0.2.2
jupyterlab-widgets 3.0.3
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.2
keras2onnx 1.6.5
kiwisolver 1.3.1
koalas 1.8.0
KqlmagicCustom 0.1.114.post8
langchain 0.0.249
langsmith 0.0.16
liac-arff 2.5.0
library-metadata-cooker 0.0.7
lightgbm 3.2.1
lime 0.2.0.1
llvmlite 0.36.0
locket 0.2.1
loguru 0.7.0
lxml 4.6.5
Markdown 3.3.4
MarkupSafe 2.0.1
marshmallow 3.20.1
matplotlib 3.4.2
matplotlib-inline 0.1.2
mistune 2.0.4
mleap 0.17.0
mlflow-skinny 1.18.0
msal 1.12.0
msal-extensions 0.2.2
msrest 0.6.21
msrestazure 0.6.4
multidict 5.1.0
mypy 0.780
mypy-extensions 0.4.3
nbclient 0.6.7
nbconvert 7.0.0
nbformat 5.4.0
ndg-httpsclient 0.5.1
nest-asyncio 1.5.5
networkx 2.5.1
nltk 3.6.2
notebook 6.4.12
notebookutils 3.1.2-20230518.1
numba 0.53.1
numexpr 2.8.4
numpy 1.24.4
oauthlib 3.1.1
olefile 0.46
onnx 1.9.0
onnxconverter-common 1.7.0
onnxmltools 1.7.0
onnxruntime 1.7.2
openai 0.27.8
openapi-schema-pydantic 1.2.4
openpyxl 3.0.7
opt-einsum 3.3.0
packaging 21.0
pandas 1.2.3
pandasql 0.7.3
pandocfilters 1.5.0
parso 0.8.2
partd 1.2.0
pathspec 0.8.1
patsy 0.5.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.2.0
pinecone-client 2.2.2
pip 23.2.1
pkgutil_resolve_name 1.3.10
platformdirs 2.5.2
plotly 4.14.3
pmdarima 1.8.2
pooch 1.4.0
portalocker 1.7.1
prettytable 2.4.0
prometheus-client 0.14.1
prompt-toolkit 3.0.19
protobuf 3.15.8
psutil 5.8.0
ptyprocess 0.7.0
py4j 0.10.9
pyarrow 3.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycairo 1.20.1
pycosat 0.6.3
pycparser 2.20
pydantic 1.8.2
pydantic_core 2.4.0
Pygments 2.9.0
PyGObject 3.40.1
PyJWT 2.1.0
pyodbc 4.0.30
pyOpenSSL 20.0.1
pyparsing 2.4.7
pyperclip 1.8.2
PyQt5 5.12.3
PyQt5_sip 4.19.18
PyQtChart 5.12
PyQtWebEngine 5.12.1
pyrsistent 0.18.1
PySocks 1.7.1
pyspark 3.1.2
python-dateutil 2.8.1
pytz 2021.1
pyu2f 0.1.5
PyWavelets 1.1.1
PyYAML 5.4.1
pyzmq 22.1.0
regex 2023.6.3
requests 2.31.0
requests-oauthlib 1.3.0
retrying 1.3.3
rsa 4.7.2
ruamel.yaml 0.17.4
ruamel.yaml.clib 0.2.6
ruamel-yaml-conda 0.15.100
SALib 1.3.11
scikit-image 0.18.1
scikit-learn 0.23.2
scipy 1.5.3
seaborn 0.11.1
SecretStorage 3.3.1
Send2Trash 1.8.0
setuptools 49.6.0.post20210108
shap 0.39.0
six 1.16.0
skl2onnx 1.8.0
sklearn-pandas 2.2.0
slicer 0.0.7
smart-open 5.1.0
smmap 3.0.5
soupsieve 2.2.1
SQLAlchemy 1.4.20
sqlanalyticsconnectorpy 1.0.1
statsmodels 0.12.2
synapseml-cognitive 0.10.2.dev1
synapseml-core 0.10.2.dev1
synapseml-deep-learning 0.10.2.dev1
synapseml-internal 0.0.0.dev1
synapseml-lightgbm 0.10.2.dev1
synapseml-opencv 0.10.2.dev1
synapseml-vw 0.10.2.dev1
tabulate 0.8.9
tenacity 8.2.2
tensorboard 2.4.1
tensorboard-plugin-wit 1.8.0
tensorflow 2.4.1
tensorflow-estimator 2.4.0
termcolor 1.1.0
terminado 0.15.0
textblob 0.15.3
threadpoolctl 2.1.0
tifffile 2021.4.8
tiktoken 0.4.0
tinycss2 1.1.1
toolz 0.11.1
torch 1.8.1
torchvision 0.9.1
tornado 6.1
tqdm 4.65.0
traitlets 5.0.5
typed-ast 1.4.3
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.4
virtualenv 20.14.0
wcwidth 0.2.5
webencodings 0.5.1
websocket-client 1.1.0
Werkzeug 2.0.1
wheel 0.36.2
widgetsnbextension 3.5.2
wrapt 1.12.1
xgboost 1.4.0
XlsxWriter 3.0.3
yarl 1.6.3
zipp 3.5.0
zope.event 4.5.0
zope.interface 5.4.0
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
Go to Azure Synapse Studio
Open a Notebook
Select PySpark (Python) as the language.
Please note that your node should run Python 3.8.
Then add the following.
### Installed the following packages in the session
```
!pip install --upgrade pip
!pip install tqdm
!pip install pinecone-client
!pip install typing-extensions==4.5.0
!pip install langchain
!pip install openai
!pip install tiktoken
```
### Here are my imports
```
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import datetime
from datetime import date, timedelta
import time
import csv
import openai
import requests
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lit, current_timestamp
from pyspark.sql import functions as F
from pyspark.sql import Row
from pyspark.sql.utils import AnalysisException
from pyspark.sql.types import StructType, StructField, StringType, TimestampType, LongType
from pyspark.sql.window import Window
from pyspark.sql import types as T
import concurrent.futures
import pinecone
import tiktoken
```
Run the notebook.
### Expected behavior
The import should happen without any issues. | multiple bases have instance lay-out conflict on HumanMessageChunk class on langchain 0.0.249 | https://api.github.com/repos/langchain-ai/langchain/issues/8577/comments | 24 | 2023-08-01T12:08:08Z | 2024-08-08T16:06:49Z | https://github.com/langchain-ai/langchain/issues/8577 | 1,831,132,425 | 8,577 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
This repo was split up between the core package and an experimental package. While this is a good idea, it's odd to have both packages in the same repo. Furthermore, you're releasing versions for both of these packages from the same repo, so 5 days ago there was a 0.0.5 release while later on you had 0.0.249.
I've actually never seen such a structure before and it's messing up scripted automation to follow releases.
### Suggestion:
Create a new repo for `LangChain Experimental` and have those releases there. | Issue: Structure of this repo is confusing | https://api.github.com/repos/langchain-ai/langchain/issues/8572/comments | 1 | 2023-08-01T08:48:57Z | 2023-11-07T16:05:58Z | https://github.com/langchain-ai/langchain/issues/8572 | 1,830,771,603 | 8,572 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Enhance the retrieval callback hooks to include:
- document IDs from the underlying vector store
- document embeddings used during retrieval
### Motivation
I want to build a callback handler that enables LangChain users to visualize their data in [Phoenix](https://github.com/Arize-ai/phoenix), an open-source tool that provides debugging workflows for retrieval-augmented generation. At the moment, I am only able to get retrieved document text from the callback system, not the IDs or embeddings of the retrieved documents.
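Concretely, the handler I would like to write looks roughly like the sketch below. Only `page_content` is reliably available today, so the id and embedding lookups are the hypothetical part:
```python
from typing import Any, Sequence
from uuid import UUID

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import Document


class RetrievalRecorder(BaseCallbackHandler):
    def on_retriever_end(
        self, documents: Sequence[Document], *, run_id: UUID, **kwargs: Any
    ) -> None:
        for doc in documents:
            text = doc.page_content  # available today
            doc_id = doc.metadata.get("vector_store_id")  # hypothetical field
            embedding = doc.metadata.get("embedding")  # hypothetical field
            print(run_id, doc_id, text[:50], embedding is not None)
```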
### Your contribution
I am willing to implement, test, and document this feature with guidance from the LangChain team. I am also happy to provide feedback on an implementation by the LangChain team by building an example callback handler using the enhancement retrieval hook functionality. | Enhance retrieval callback hooks to capture retrieved document IDs and embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/8569/comments | 4 | 2023-08-01T07:44:33Z | 2024-02-13T16:14:38Z | https://github.com/langchain-ai/langchain/issues/8569 | 1,830,662,153 | 8,569 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Make cosine similarity (or other retrieval metric, e.g., Euclidean distance) between query and retrieved documents available at the `retriever` callback hook.
Currently, these scores are computed during retrieval and are discarded before they become available to the retrieval callback hook:
https://github.com/langchain-ai/langchain/blob/125ae6d9dec440e04a60f63f0f4fc8411f482df8/libs/langchain/langchain/vectorstores/base.py#L500
https://github.com/langchain-ai/langchain/blob/125ae6d9dec440e04a60f63f0f4fc8411f482df8/libs/langchain/langchain/schema/retriever.py#L174
### Motivation
I want to build a callback handler that enables LangChain users to visualize their data in [Phoenix](https://github.com/Arize-ai/phoenix), an open-source tool that provides debugging workflows for retrieval-augmented generation. At the moment, it is not possible to get the similarity scores between queries and retrieved documents out of LangChain's callback system, for example, when using the `RetrievalQA` chain. Here is an [example notebook](https://github.com/Arize-ai/phoenix/blob/main/tutorials/langchain_pinecone_search_and_retrieval_tutorial.ipynb) where I sub-class `Pinecone` to get out the similarity scores:
```
class PineconeWrapper(Pinecone):
query_text_to_document_score_tuples: Dict[str, List[Tuple[Document, float]]] = {}
def similarity_search_with_score(
self,
query: str,
k: int = 4,
filter: Optional[dict] = None,
namespace: Optional[str] = None,
) -> List[Tuple[Document, float]]:
document_score_tuples = super().similarity_search_with_score(
query=query,
k=k,
filter=filter,
namespace=namespace,
)
self.query_text_to_document_score_tuples[query] = document_score_tuples
return document_score_tuples
@property
def retrieval_dataframe(self) -> pd.DataFrame:
query_texts = []
document_texts = []
retrieval_ranks = []
scores = []
for query_text, document_score_tuples in self.query_text_to_document_score_tuples.items():
for retrieval_rank, (document, score) in enumerate(document_score_tuples):
query_texts.append(query_text)
document_texts.append(document.page_content)
retrieval_ranks.append(retrieval_rank)
scores.append(score)
return pd.DataFrame.from_dict(
{
"query_text": query_texts,
"document_text": document_texts,
"retrieval_rank": retrieval_ranks,
"score": scores,
}
)
```
I would like the LangChain callback system to support this use-case.
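In the meantime, the only alternative I see to subclassing each vector store is a retriever subclass that copies the score into each document's metadata before it is dropped. A sketch, where `similarity_score` is a made-up metadata key:
```python
from typing import List

from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.schema import Document
from langchain.vectorstores.base import VectorStoreRetriever


class ScorePreservingRetriever(VectorStoreRetriever):
    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        docs_and_scores = self.vectorstore.similarity_search_with_score(
            query, **self.search_kwargs
        )
        for doc, score in docs_and_scores:
            doc.metadata["similarity_score"] = score  # made-up metadata key
        return [doc for doc, _ in docs_and_scores]
```
Even if something like this works, it still does not surface the scores through the callback system, which is what this feature request is about.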
### Your contribution
I am willing to implement, test, and document this feature with guidance from the LangChain team. I am also happy to provide feedback on an implementation by the LangChain team by building an example callback handler using the enhancement retrieval hook functionality. | Enhance retrieval callback hooks to include information on cosine similarity scores or other retrieval metrics | https://api.github.com/repos/langchain-ai/langchain/issues/8567/comments | 6 | 2023-08-01T07:18:29Z | 2024-06-01T00:07:32Z | https://github.com/langchain-ai/langchain/issues/8567 | 1,830,623,474 | 8,567 |