issue_owner_repo (list) | issue_body (string) | issue_title (string) | issue_comments_url (string) | issue_comments_count (int64) | issue_created_at (string) | issue_updated_at (string) | issue_html_url (string) | issue_github_id (int64) | issue_number (int64) |
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Is this implemented?
https://github.com/langchain-ai/langchain/pull/1222/commits
### Suggestion:
_No response_ | streaming | https://api.github.com/repos/langchain-ai/langchain/issues/10038/comments | 2 | 2023-08-31T09:27:42Z | 2023-12-07T16:05:40Z | https://github.com/langchain-ai/langchain/issues/10038 | 1,875,139,090 | 10,038 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi all 😊,
I work in a network-isolated SageMaker environment, where I have hosted a Llama 2 7B chat inference endpoint (from [HF](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)).
For a project I would like to work with LangChain.
Currently I am stuck on the embeddings.
I can't use the HF embedding engine because of the network isolation:
`hf_embedding = HuggingFaceInstructEmbeddings()`
Alternatively, I found the SageMaker endpoint embeddings:
https://python.langchain.com/docs/integrations/text_embedding/sagemaker-endpoint
I used the same code as in the documentation:
```python
from typing import Dict, List
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler
import json
class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, inputs: List[str], model_kwargs: Dict) -> bytes:
        """
        Transforms the input into bytes that can be consumed by the SageMaker endpoint.

        Args:
            inputs: List of input strings.
            model_kwargs: Additional keyword arguments to be passed to the endpoint.

        Returns:
            The transformed bytes input.
        """
        # Example: inference.py expects a JSON string with an "inputs" key:
        input_str = json.dumps({"inputs": inputs, **model_kwargs})
        return input_str.encode("utf-8")

    def transform_output(self, output: bytes) -> List[List[float]]:
        """
        Transforms the bytes output from the endpoint into a list of embeddings.

        Args:
            output: The bytes output from the SageMaker endpoint.

        Returns:
            The transformed output - list of embeddings.

        Note:
            The length of the outer list is the number of input strings.
            The length of the inner lists is the embedding dimension.
        """
        # Example: inference.py returns a JSON string with the list of
        # embeddings in a "vectors" key:
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["vectors"]
content_handler = ContentHandler()
embeddings = SagemakerEndpointEmbeddings(
endpoint_name=ENDPOINT_NAME,
region_name=REGION_NAME,
content_handler=content_handler ,
)
query_result = embeddings.embed_query("foo")
```
But I get the following error:
`---------------------------------------------------------------------------
ModelError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/sagemaker_endpoint.py:153, in SagemakerEndpointEmbeddings._embedding_func(self, texts)
[...]
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (422) from primary with message "Failed to deserialize the JSON body into the target type: inputs: invalid type: sequence, expected a string at line 1 column 11". See https://eu-central-1.console.aws.amazon.com/cloudwatch/home?xxxxx in account xxxxxx for more information.`
### Suggestion:
_No response_ | Problems with using LangChain Sagemaker Embedding for Llama 2.0 Inference Endpoint in Sagemaker | https://api.github.com/repos/langchain-ai/langchain/issues/10037/comments | 3 | 2023-08-31T09:09:17Z | 2024-01-18T20:18:04Z | https://github.com/langchain-ai/langchain/issues/10037 | 1,875,107,736 | 10,037 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The idea is that I have a vector store with a ConversationalRetrievalChain.from_llm, and I want to create other functions, such as sending an email, so that when a user queries something it determines whether it should use the conversational retrieval chain or one of the other functions (such as the send-email function). It seems I need to use the router to achieve this, but when I do, I get a lot of errors such as
```
"default_chain -> prompt
field required (type=value_error.missing)
default_chain -> llm
field required (type=value_error.missing)
default_chain -> combine_docs_chain
extra fields not permitted (type=value_error.extra)
```
How do I integrate the conversational retrieval chain with router chains to achieve this? The only examples I have seen do not use a conversational retrieval chain.
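For reference, a minimal sketch of the kind of setup described above, with the retrieval chain and a stand-in `send_email` function exposed as agent tools so the model can choose between them. The names, the email function, and the use of an agent (rather than a router chain) are illustrative assumptions, not the reporter's actual code:
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0)
# `vectorstore` is assumed to already exist, as described above.
qa_chain = ConversationalRetrievalChain.from_llm(llm, retriever=vectorstore.as_retriever())

def send_email(text: str) -> str:
    """Hypothetical stand-in for the real email-sending function."""
    return f"Email sent with body: {text}"

tools = [
    Tool(
        name="knowledge_base",
        func=lambda q: qa_chain({"question": q, "chat_history": []})["answer"],
        description="Answers questions using the document vector store.",
    ),
    Tool(
        name="send_email",
        func=send_email,
        description="Sends an email with the given text.",
    ),
]

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
```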
### Suggestion:
_No response_ | Conversational Retrieval Chain.from_llm integration with router chains | https://api.github.com/repos/langchain-ai/langchain/issues/10035/comments | 2 | 2023-08-31T08:32:26Z | 2023-12-07T16:05:44Z | https://github.com/langchain-ai/langchain/issues/10035 | 1,875,045,957 | 10,035 |
[
"hwchase17",
"langchain"
]
| ### Feature request
When parsing LLM output with a `pydantic_object`, it would be nice to pass a `context` object to the `parse` function, like the `pydantic` docs do here: https://docs.pydantic.dev/latest/usage/validators/#validation-context
### Motivation
In order to validate LLM output rather than just parse it, in some cases we need additional context to perform the validation.
### Your contribution
This change may require supporting `pydantic` v2; I'm not sure about backward compatibility.
In the `PydanticOutputParser.parse` function, instead of `parse_obj` we should use `model_validate`; then we could pass a context object (it can be optional) to `model_validate`.
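A rough, illustrative sketch of the idea (assuming pydantic v2; it skips the markdown-stripping logic of the real parser and adds a separate method rather than changing the base `parse` signature):
```python
import json
from typing import Any, Optional

from langchain.output_parsers import PydanticOutputParser


class ContextualPydanticOutputParser(PydanticOutputParser):
    """Sketch: forward a validation context to pydantic v2's model_validate."""

    def parse_with_context(self, text: str, context: Optional[Any] = None):
        json_object = json.loads(text)  # simplified extraction of the JSON blob
        # Validators can read this via ValidationInfo.context in pydantic v2.
        return self.pydantic_object.model_validate(json_object, context=context)
```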
For reference, from the `pydantic` source:
```python
@typing_extensions.deprecated(
    'The `parse_obj` method is deprecated; use `model_validate` instead.', category=PydanticDeprecatedSince20
)
def parse_obj(cls: type[Model], obj: Any) -> Model:  # noqa: D102
    warnings.warn('The `parse_obj` method is deprecated; use `model_validate` instead.', DeprecationWarning)
    return cls.model_validate(obj)
``` | context to pydantic_object | https://api.github.com/repos/langchain-ai/langchain/issues/10034/comments | 2 | 2023-08-31T08:23:33Z | 2023-12-07T16:05:50Z | https://github.com/langchain-ai/langchain/issues/10034 | 1,875,031,907 | 10,034 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I'm using BaseMessagePromptTemplate with external DBs for version control.
However, dumping a BaseMessagePromptTemplate to JSON and loading it back into the original template is difficult, as the exact message type cannot be recovered from the JSON.
Therefore, it would be useful to add a `_msg_type` property on BaseMessagePromptTemplate,
like in the [BasePromptTemplate dict method](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/schema/prompt_template.py#L108-L116).
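Purely as an illustration of the kind of round-trip this would enable (the property name, the tag values, and the helper below are assumptions, not the actual LangChain implementation):
```python
from langchain.prompts.chat import (
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

# Hypothetical: a `_msg_type` tag recorded alongside the dumped template so a
# loader knows which concrete class to rebuild.
MSG_TYPE_BY_CLASS = {
    HumanMessagePromptTemplate: "human",
    AIMessagePromptTemplate: "ai",
    SystemMessagePromptTemplate: "system",
}

def dump_message_template(tmpl) -> dict:
    """Serialize a message prompt template together with its type tag."""
    return {"_msg_type": MSG_TYPE_BY_CLASS[type(tmpl)], "data": tmpl.dict()}
```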
### Motivation
Dumping a ChatPromptTemplate to JSON and loading it back into the original template is difficult,
as recovering the exact type of each MessageLike (Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate]) entry is difficult.
### Your contribution
If you allow me, I'd like to make a pull request regarding to this. | Add message type property method on BaseMessagePromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/10033/comments | 1 | 2023-08-31T08:23:10Z | 2023-09-01T00:13:23Z | https://github.com/langchain-ai/langchain/issues/10033 | 1,875,031,189 | 10,033 |
[
"hwchase17",
"langchain"
]
| ### Feature request
When using `RetrievalQAWithSourcesChain`, the chain call can't accept `search_kwargs` to pass them to the retriever like this:
```python
response = chain({"question": query ,'search_kwargs': search_kwargs})
```
In particular, I tried with `Milvus` and `MilvusRetriever` and natively couldn't find a way.
### Motivation
Instead of creating the same chain again with different `search_kwargs` for the retriever, it would be useful to [optionally] allow the `search_kwargs` to be passed dynamically in the call.
### Your contribution
I could enable this behavior with the following modification:
Adding a `search_kwargs_key` to take `search_kwargs`, which will then be sent to self.retriever.get_relevant_documents(..., `**search_kwargs`, ...).
```python
class customRetrievalQAWithSourcesChain(RetrievalQAWithSourcesChain):
    search_kwargs_key: str = "search_kwargs"

    def _get_docs(
        self, inputs: Dict[str, Any], *, run_manager: CallbackManagerForChainRun
    ) -> List[Document]:
        question = inputs[self.question_key]
        search_kwargs = inputs[self.search_kwargs_key]
        docs = self.retriever.get_relevant_documents(
            question, **search_kwargs, callbacks=run_manager.get_child()
        )
        return self._reduce_tokens_below_limit(docs)

    async def _aget_docs(
        self, inputs: Dict[str, Any], *, run_manager: AsyncCallbackManagerForChainRun
    ) -> List[Document]:
        question = inputs[self.question_key]
        search_kwargs = inputs[self.search_kwargs_key]
        docs = await self.retriever.aget_relevant_documents(
            question, **search_kwargs, callbacks=run_manager.get_child()
        )
        return self._reduce_tokens_below_limit(docs)
```
And finally, allowing VectorStoreRetriever to take `**search_kwargs` instead of `self.search_kwargs`:
```python
class customRetriever(VectorStoreRetriever):
    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun, **search_kwargs: Any,
    ) -> List[Document]:
        if self.search_type == "similarity":
            docs = self.vectorstore.similarity_search(query, **search_kwargs)
        elif self.search_type == "similarity_score_threshold":
            docs_and_similarities = (
                self.vectorstore.similarity_search_with_relevance_scores(
                    query, **search_kwargs
                )
            )
            docs = [doc for doc, _ in docs_and_similarities]
        elif self.search_type == "mmr":
            docs = self.vectorstore.max_marginal_relevance_search(
                query, **search_kwargs
            )
        else:
            raise ValueError(f"search_type of {self.search_type} not allowed.")
        return docs

    async def _aget_relevant_documents(
        self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun, **search_kwargs: Any,
    ) -> List[Document]:
        if self.search_type == "similarity":
            docs = await self.vectorstore.asimilarity_search(
                query, **search_kwargs
            )
        elif self.search_type == "similarity_score_threshold":
            docs_and_similarities = (
                await self.vectorstore.asimilarity_search_with_relevance_scores(
                    query, **search_kwargs
                )
            )
            docs = [doc for doc, _ in docs_and_similarities]
        elif self.search_type == "mmr":
            docs = await self.vectorstore.amax_marginal_relevance_search(
                query, **search_kwargs
            )
        else:
            raise ValueError(f"search_type of {self.search_type} not allowed.")
        return docs
```
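With those subclasses in place, the intended call would look roughly like this (illustrative only; `vectorstore`, `llm`, and `query` are assumed to exist, and the `expr` filter is just an example of a Milvus-style kwarg):
```python
custom_retriever = customRetriever(vectorstore=vectorstore, search_type="similarity")
chain = customRetrievalQAWithSourcesChain.from_chain_type(llm=llm, retriever=custom_retriever)

response = chain({
    "question": query,
    "search_kwargs": {"k": 4, "expr": "source == 'docs'"},
})
```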
If you consider this useful, I could open a PR (please confirm). If not, maybe someone else will find it useful.
Best regards, | Dynamic "search_kwargs" during RetrievalQAWithSourcesChain call | https://api.github.com/repos/langchain-ai/langchain/issues/10031/comments | 2 | 2023-08-31T08:06:10Z | 2023-12-05T17:47:16Z | https://github.com/langchain-ai/langchain/issues/10031 | 1,875,002,896 | 10,031 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Has anyone created an agent with a tool that takes "dataframe" and "user_input" as input variables in LangChain?
I do not want to use the dataframe agent that already exists in LangChain, as I need to pass further instructions in the prompt template.
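As a hedged illustration of the question (not a recommendation), one common pattern is to close over the dataframe so the tool itself only receives `user_input`; the dataframe and the returned string below are made up:
```python
import pandas as pd
from langchain.agents import Tool

df = pd.DataFrame({"sales": [10, 20, 30]})  # placeholder dataframe

def make_dataframe_tool(dataframe: pd.DataFrame) -> Tool:
    """Wrap a dataframe in a closure so the tool only needs `user_input`."""
    def answer(user_input: str) -> str:
        # A real implementation would hand `dataframe` and `user_input`
        # to an LLM chain built from a custom prompt template.
        return f"Received '{user_input}' for a dataframe with {len(dataframe)} rows."
    return Tool(name="dataframe_qa", func=answer,
                description="Answers questions about the loaded dataframe.")

dataframe_tool = make_dataframe_tool(df)
```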
### Suggestion:
_No response_ | Does anyone created agent with tool which takes "dataframe" and "user_input" as input variables in langchain? | https://api.github.com/repos/langchain-ai/langchain/issues/10028/comments | 3 | 2023-08-31T06:46:58Z | 2023-12-07T16:05:55Z | https://github.com/langchain-ai/langchain/issues/10028 | 1,874,888,777 | 10,028 |
[
"hwchase17",
"langchain"
]
Can anyone tell me the difference between these two parameters when setting up the ChatOpenAI model?
| Issue: I dont know what the meaning of OPENAI_API_BASE and OPENAI_PROXY in ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/10027/comments | 1 | 2023-08-31T06:35:39Z | 2023-08-31T08:32:27Z | https://github.com/langchain-ai/langchain/issues/10027 | 1,874,874,756 | 10,027 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello,
I am facing some issues with the code I am trying to run, as it ran perfectly up until recently, when an error started being thrown: AttributeError: 'list' object has no attribute 'embedding'. Below is the traceback of the error. Please let me know if some code snippet excerpts are also needed to facilitate debugging.
Traceback:
File "/home/ataliba/llm/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/home/ataliba/LLM_Workshop/Experimental_Lama_QA_Retrieval/Andro_GPT_Llama2.py", line 268, in <module>
response = qa_chain.run(user_query, callbacks=[cb])
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 481, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 288, in __call__
raise e
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 134, in _call
docs = self._get_docs(new_question, inputs, run_manager=_run_manager)
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 286, in _get_docs
docs = self.retriever.get_relevant_documents(
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/schema/retriever.py", line 208, in get_relevant_documents
raise e
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/schema/retriever.py", line 201, in get_relevant_documents
result = self._get_relevant_documents(
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 571, in _get_relevant_documents
docs = self.vectorstore.max_marginal_relevance_search(
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/vectorstores/docarray/base.py", line 197, in max_marginal_relevance_search
np.array(query_embedding), docs.embedding, k=k
### Suggestion:
_No response_ | Issue: AttributeError: 'list' object has no attribute 'embedding' | https://api.github.com/repos/langchain-ai/langchain/issues/10025/comments | 2 | 2023-08-31T05:40:08Z | 2023-12-07T16:06:00Z | https://github.com/langchain-ai/langchain/issues/10025 | 1,874,819,757 | 10,025 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The class ErnieBotChat defined in libs/langchain/langchain/chat_models/ernie.py only supports 2 models, ERNIE-Bot-turbo and ERNIE-Bot, while a bunch of new models are supported by BCE (Baidu Cloud Engine), such as Llama-2-7b chat (https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Rlki1zlai). It would be better if this class supported more models.
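Purely as an illustration of the kind of change being requested (the model names and endpoint identifiers below are assumptions, not the actual contents of `ernie.py`):
```python
# Hypothetical mapping from model_name to a BCE chat endpoint identifier.
SUPPORTED_MODELS = {
    "ERNIE-Bot-turbo": "eb-instant",      # assumed identifier
    "ERNIE-Bot": "completions",           # assumed identifier
    "Llama-2-7b-chat": "llama_2_7b",      # assumed identifier
    "Llama-2-13b-chat": "llama_2_13b",    # assumed identifier
}

def resolve_endpoint(model_name: str) -> str:
    """Return the endpoint identifier for a model, or fail loudly."""
    try:
        return SUPPORTED_MODELS[model_name]
    except KeyError:
        raise ValueError(f"Got unknown model_name {model_name}")
```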
### Motivation
When I was using ErnieBotChat, I found that it cannot recognize models other than ERNIE-Bot-turbo and ERNIE-Bot. Instead, it raised the error "Got unknown model_name {self.model_name}".
### Your contribution
If possible, I would be happy to help resolve this issue. I plan to add all models BCE (Baidu Cloud Engine) supports at this time(2023-8-31). The fixes will be simple, just add more cases around line 106 in the file libs/langchain/langchain/chat_models/ernie.py from the master branch. | Support more models for ErnieBotChat | https://api.github.com/repos/langchain-ai/langchain/issues/10022/comments | 3 | 2023-08-31T05:22:17Z | 2023-12-13T16:07:03Z | https://github.com/langchain-ai/langchain/issues/10022 | 1,874,805,344 | 10,022 |
[
"hwchase17",
"langchain"
]
| ### System Info
MacOS M2 13.4.1 (22F82)
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce behaviour:
1. Run the [tutorial](https://python.langchain.com/docs/integrations/document_loaders/youtube_audio) with the default parameters `save_dir = "~/Downloads/YouTube"`
2. After calling `docs = loader.load()` the docs will be empty
I have implemented a dummy fix for the interim.
The error is in the file imported via `from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader`,
in the `YoutubeAudioLoader.yield_blobs` method, at the line shown below:
```
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
```
The reason it doesn't work is that it's trying to use ~/Downloads/YouTube.
The fix I propose is either:
- Use the FULL file path in `save_dir` in the tutorial.
- Replace the problematic line with this, so that it finds the actual directory, even if you prefer to use `~` for specifying file paths.
```
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
### Expected behavior
There should be documents in the loader.load() variable.
### My Fix
```
# Yield the written blobs
"""
you could fix save_dir like this...
(old)
save_dir = "~/Downloads/YouTube"
(new)
"/Users/shawnesquivel/Downloads/YouTube"
"""
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
# This works
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
| fix: Loading documents from a Youtube Url | https://api.github.com/repos/langchain-ai/langchain/issues/10019/comments | 1 | 2023-08-31T03:19:25Z | 2023-12-07T16:06:10Z | https://github.com/langchain-ai/langchain/issues/10019 | 1,874,719,531 | 10,019 |
[
"hwchase17",
"langchain"
]
| @dosu-bot
The issue is the "string indices must be integers" error, but your transformation does not deal with that. Here is the current code; please see below and change it to avoid this error:
```python
from langchain.docstore.document import Document
from typing import Dict
from langchain import PromptTemplate, SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler
from langchain.chains.question_answering import load_qa_chain
import json
example_doc_1 = """
string
"""
docs = [
Document(
page_content=example_doc_1,
)
]
query = """
prompt
"""
prompt_template = """Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
Answer:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        input_dict = {"inputs": prompt, "parameters": model_kwargs}
        return json.dumps(input_dict).encode('utf-8')

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        print("output: ", response_json[0])
        return response_json[0]["generation"]
content_handler = ContentHandler()
chain = load_qa_chain(
llm=SagemakerEndpoint(
endpoint_name="endpointname",
credentials_profile_name="profilename",
region_name="us-east-1",
model_kwargs={"temperature": 1e-10, "max_length":500},
endpoint_kwargs={"CustomAttributes": "accept_eula=true"},
content_handler=content_handler,
),
prompt=PROMPT,
)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)```
_Originally posted by @maggarwal25 in https://github.com/langchain-ai/langchain/issues/10012#issuecomment-1700300547_
| @dosu-bot | https://api.github.com/repos/langchain-ai/langchain/issues/10017/comments | 7 | 2023-08-31T03:01:34Z | 2023-12-07T16:06:15Z | https://github.com/langchain-ai/langchain/issues/10017 | 1,874,707,605 | 10,017 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.277
azure-search-documents 11.4.0b8
python 3.10.11
### Who can help?
@baskaryan
@ruoccofabrizio
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Repro steps:
1. I took the code in https://github.com/langchain-ai/langchain/blob/master/docs/extras/integrations/vectorstores/azuresearch.ipynb and placed it in a python file and ran the chunk under "Create a new index with a Scoring profile"
2. I executed the code and got the following scores:

3. I went into the Azure Search Index and adjusted the "scoring_profile" to have a boost of 1000 instead of 100 and got the exact same scores.
### Expected behavior
I expected all of the scores to be 10 times larger than the scores I got. After much experimentation, I do not believe that scoring profiles work with a vector search if a search term is specified.
`
res = vector_store.similarity_search(query="Test 1", k=3, search_type="similarity")`
And the results respect the Scoring profile and behave as expected when the scoring profile is changed. | Azure Cognitive Search Scoring Profile does not work as documented | https://api.github.com/repos/langchain-ai/langchain/issues/10015/comments | 5 | 2023-08-31T02:30:27Z | 2023-12-08T16:05:06Z | https://github.com/langchain-ai/langchain/issues/10015 | 1,874,685,238 | 10,015 |
[
"hwchase17",
"langchain"
]
| ### System Info
latest versions for python and langchain
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
I used the demo code to run SageMaker using LangChain for llama-2-7b-f.
However, I'm getting the following issues:
```
ValueError: Error raised by inference endpoint: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (424) from primary with message "{
"code":424,
"message":"prediction failure",
"error":"string indices must be integers"
}
```
This is what is shown from AWS Logs:
`[INFO ] PyProcess - W-80-model-stdout: [1,0]<stdout>:TypeError: string indices must be integers`
How do I resolve this
### Expected behavior
Shown:
```
ValueError: Error raised by inference endpoint: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (424) from primary with message "{
"code":424,
"message":"prediction failure",
"error":"string indices must be integers"
}
```
| Issues with llama-2-7b-f | https://api.github.com/repos/langchain-ai/langchain/issues/10012/comments | 11 | 2023-08-30T22:43:26Z | 2024-03-18T16:05:19Z | https://github.com/langchain-ai/langchain/issues/10012 | 1,874,476,223 | 10,012 |
[
"hwchase17",
"langchain"
]
| ### System Info
The latest langchain version
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have written a script that initializes a chat which has access to a FAISS index. I want to pass a system prompt to the conversational agent. How do I add a system prompt to the conversational agent while making sure that the chat history is passed in explicitly rather than stored and updated in memory? The code below shows the details.
```
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import AzureChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.vectorstores import FAISS
from dotenv import load_dotenv
from langchain.schema import SystemMessage
load_dotenv()
import os
system_message = SystemMessage(
content="You are an AI that always writes text backwards e.g. 'hello' becomes 'olleh'."
)
embeddings = OpenAIEmbeddings(deployment="embeddings",
temperature=0,
model="text-embedding-ada-002",
openai_api_type="azure",
openai_api_version="2023-03-15-preview",
openai_api_base="https://azcore-kusto-bot-openai.openai.azure.com/",
openai_api_key=os.getenv("OPENAI_API_KEY"), chunk_size = 1)
vectorstore = FAISS.load_local("faiss_index", embeddings)
prompt = "Your name is AzCoreKustoCopilot"
llm=AzureChatOpenAI(deployment_name="16k-gpt-35",
model="gpt-35-turbo-16k",
openai_api_type="azure",
openai_api_version="2023-03-15-preview",
openai_api_base="https://azcore-kusto-bot-openai.openai.azure.com/",
openai_api_key=os.getenv("OPENAI_API_KEY"))
retriever = vectorstore.as_retriever()
def initialize_chat():
    # retriever = vectorstore.as_retriever()
    chat = ConversationalRetrievalChain.from_llm(llm, retriever=retriever)
    print('this is chat', chat)
    return chat

def answer_query(chat, user_query, chat_history):
    """
    user_query is the question
    chat_history is a list of lists: [[prev_prev_query, prev_prev_response], [prev_query, prev_response]]
    we convert this into a list of tuples
    """
    chat_history_tups = [tuple(el) for el in chat_history]
    print(f"user_query:{user_query}, chat_history: {chat_history_tups}")
    result = chat({"question":user_query, "chat_history": chat_history_tups})
    return result["answer"]
```
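For what it's worth, a hedged sketch of one way a system-style instruction is sometimes injected into this chain, via `combine_docs_chain_kwargs` with a custom prompt, while still passing `chat_history` explicitly on every call. Treat the argument name and the prompt below as assumptions to verify against the installed LangChain version rather than a confirmed API:
```python
from langchain.prompts import PromptTemplate

qa_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=(
        "You are AzCoreKustoCopilot.\n"
        "Use the following context to answer the question.\n\n"
        "{context}\n\nQuestion: {question}\nAnswer:"
    ),
)

chat = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    combine_docs_chain_kwargs={"prompt": qa_prompt},
)
result = chat({"question": "example question", "chat_history": []})
```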
### Expected behavior
There is no way to pass in a system message and chat history to a ConversationalRetrievalChain. Why is that? | No support for ConversationalRetrievalChain with passing in memory and system message | https://api.github.com/repos/langchain-ai/langchain/issues/10011/comments | 6 | 2023-08-30T22:14:38Z | 2024-01-30T00:42:39Z | https://github.com/langchain-ai/langchain/issues/10011 | 1,874,448,914 | 10,011 |
[
"hwchase17",
"langchain"
]
| ### Feature request
An agent for iteratively searching multiple documents without the problem of processing incomplete document chunks.
An option to include the metadata (source references) in the prompt.
### Motivation
Normally documents are split into chunks before being added to Chroma.
When the data is queried, Chroma returns these chunks of incomplete documents, and they are fed into the prompt.
Thus, the LLM sometimes is not provided with the complete information and will fail to answer.
This is a big problem, especially when the split occurs in the middle of a list (e.g. a text listing the 10 commandments of God).
The LLM won't have a chance to know there are 10.
Besides, the LangChain "stuff" chain just mixes all these chunks together, without even separating them or adding each chunk's document metadata. Mixing unrelated sentences could confuse the LLM.
If this can be solved using document_prompt templates, this should be added to the documentation.
I would also expect to be able to include document sources in the prompt, so the LLM can cite the sources it actually used (not all sources retrieved by the Chroma query).
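On the source-references point, a hedged sketch of what a `document_prompt` might look like (assuming an existing `llm`, a Chroma `vectordb`, and a `source` key in each document's metadata; not a tested recipe):
```python
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

document_prompt = PromptTemplate(
    input_variables=["page_content", "source"],
    template="Source: {source}\n{page_content}",
)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    retriever=vectordb.as_retriever(),
    chain_type="stuff",
    chain_type_kwargs={"document_prompt": document_prompt},
)
```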
I believe the queries should be processed by an agent with the ability to detect when previous and/or following chunks may be missing, in order to fetch them in a subsequent iteration if required.
### Your contribution
I can help coding and testing, but I need feedback for the design and to know which other existing componentes could/should be used. | Solve the problem of working with incomplete document chunks and multiple documents | https://api.github.com/repos/langchain-ai/langchain/issues/9996/comments | 9 | 2023-08-30T14:58:47Z | 2024-02-09T02:13:41Z | https://github.com/langchain-ai/langchain/issues/9996 | 1,873,859,050 | 9,996 |
[
"hwchase17",
"langchain"
]
| ### Feature request
```
from langchain.chat_models import ChatOpenAI
from langchain.chains import GraphCypherQAChain
from langchain.graphs import Neo4jGraph
graph = Neo4jGraph(
url="bolt://localhost:7687", username="neo4j", password="pleaseletmein"
)
chain = GraphCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
print(chain.run("Question to the graph"))
```
In the above code, how can I pass my custom prompt as a PromptTemplate? Please give me an example.
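A hedged illustration of one way this is commonly attempted, passing a `cypher_prompt` to `from_llm`; whether this argument is supported should be verified against the installed version:
```python
from langchain.prompts.prompt import PromptTemplate

CYPHER_GENERATION_TEMPLATE = """Task: Generate a Cypher statement to query a graph database.
Schema:
{schema}
Only answer questions related to the schema above.
The question is:
{question}"""

cypher_prompt = PromptTemplate(
    input_variables=["schema", "question"], template=CYPHER_GENERATION_TEMPLATE
)

chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True, cypher_prompt=cypher_prompt
)
```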
### Motivation
Custom prompt support in Knowledge Graph QA
### Your contribution
Custom prompt support in Knowledge Graph QA | How to pass a custom prompt in graphQA or GraphCypherQA chain | https://api.github.com/repos/langchain-ai/langchain/issues/9993/comments | 5 | 2023-08-30T13:42:34Z | 2024-05-04T13:18:22Z | https://github.com/langchain-ai/langchain/issues/9993 | 1,873,711,576 | 9,993 |
[
"hwchase17",
"langchain"
]
| ### System Info
0.0.276
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have deployed Llama 2 on an Azure ML endpoint, and when I test it with LangChain code I get a 404 error. However, using the standard request library from Python it works.
Below: askdocuments2 ---> no LangChain
askdocuments ---> LangChain
Same endpoint URL, same key.
```
def askdocuments2(
        question):
    # Request data goes here
    # The example below assumes JSON formatting which may be updated
    # depending on the format your endpoint expects.
    # More information can be found here:
    # https://docs.microsoft.com/azure/machine-learning/how-to-deploy-advanced-entry-script
    formatter = LlamaContentFormatter()
    data = formatter.format_request_payload(prompt=question, model_kwargs={"temperature": 0.1, "max_tokens": 300})
    body = data
    url = 'https://llama-2-7b-test.westeurope.inference.ml.azure.com/score'
    # Replace this with the primary/secondary key or AMLToken for the endpoint
    api_key = ''
    if not api_key:
        raise Exception("A key should be provided to invoke the endpoint")
    # The azureml-model-deployment header will force the request to go to a specific deployment.
    # Remove this header to have the request observe the endpoint traffic rules
    headers = {'Content-Type': 'application/json', 'Authorization': ('Bearer ' + api_key), 'azureml-model-deployment': 'llama'}
    req = urllib.request.Request(url, body, headers)
    try:
        response = urllib.request.urlopen(req)
        result = response.read()
        decoded_data = json.loads(result.decode('utf-8'))
        text = decoded_data[0]["0"]
        return text
    except urllib.error.HTTPError as error:
        print("The request failed with status code: " + str(error.code))
        # Print the headers - they include the request ID and the timestamp, which are useful for debugging the failure
        print(error.info())
        print(error.read().decode("utf8", 'ignore'))

def askdocuments(
        question):
    try:
        content_formatter = LlamaContentFormatter()
        llm = AzureMLOnlineEndpoint(
            endpoint_api_key="",
            deployment_name="llama-2-7b-test",
            endpoint_url="https://llama-2-7b-test.westeurope.inference.ml.azure.com/score",
            model_kwargs={"temperature": 0.8, "max_tokens": 300},
            content_formatter=content_formatter
        )
        formatter_template = "Write a {word_count} word essay about {topic}."
        prompt = PromptTemplate(
            input_variables=["word_count", "topic"], template=formatter_template
        )
        chain = LLMChain(llm=llm, prompt=prompt)
        response = chain.run({"word_count": 100, "topic": "how to make friends"})
        return response
    except requests.exceptions.RequestException as e:
        # Handle any requests-related errors (e.g., network issues, invalid URL)
        raise ValueError(f"Error with the API request: {e}")
    except json.JSONDecodeError as e:
        # Handle any JSON decoding errors (e.g., invalid JSON format)
        raise ValueError(f"Error decoding API response as JSON: {e}")
    except Exception as e:
        # Handle any other errors
        raise ValueError(f"Error: {e}")
```
### Expected behavior
According to the documentation I am doing everything correctly, so not sure why its showing a 404 error in a valid url | AzureMLOnlineEndpoint not working, 404 error, but same url and api key works with standard http | https://api.github.com/repos/langchain-ai/langchain/issues/9987/comments | 5 | 2023-08-30T08:45:03Z | 2023-11-03T14:14:44Z | https://github.com/langchain-ai/langchain/issues/9987 | 1,873,221,947 | 9,987 |
[
"hwchase17",
"langchain"
]
| ### System Info
I tried to use langchain with Azure Cognitive Search as vector store and got the following Import Error.
langchain version: 0.0.276
azure documents: 11.4.0b8
python version: 3.8
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried to use the Azure Cognitive Search as vector store
```
model: str = "text-embedding-ada-002"
search_service = os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"]
search_api_key = os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"]
vector_store_address: str = f"https://{search_service}.search.windows.net"
vector_store_password: str = search_api_key
# define embedding model for calculating the embeddings
model: str = "text-embedding-ada-002"
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
embedding_function = embeddings.embed_query
# define schema of the json file stored on the index
fields = [
SimpleField(
name="id",
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchableField(
name="content",
type=SearchFieldDataType.String,
searchable=True,
),
SearchField(
name="content_vector",
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=len(embedding_function("Text")),
vector_search_configuration="default",
),
SearchableField(
name="metadata",
type=SearchFieldDataType.String,
searchable=True,
),
]
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embedding_function,
fields=fields,
)
```
And I got the following import error
```
Cell In[17], line 72, in azure_search_by_index(question, index_name)
21 # define schema of the json file stored on the index
22 fields = [
23 SimpleField(
24 name="id",
(...)
69 ),
70 ]
---> 72 vector_store: AzureSearch = AzureSearch(
73 azure_search_endpoint=vector_store_address,
74 azure_search_key=vector_store_password,
75 index_name=index_name,
76 embedding_function=embedding_function,
77 fields=fields,
78 )
80 relevant_documentation = vector_store.similarity_search(query=question, k=1, search_type="similarity")
82 context = "\n".join([doc.page_content for doc in relevant_documentation])[:10000]
File /anaconda/envs/jupyter_env/lib/python3.8/site-packages/langchain/vectorstores/azuresearch.py:234, in AzureSearch.__init__(self, azure_search_endpoint, azure_search_key, index_name, embedding_function, search_type, semantic_configuration_name, semantic_query_language, fields, vector_search, semantic_settings, scoring_profiles, default_scoring_profile, **kwargs)
232 if "user_agent" in kwargs and kwargs["user_agent"]:
233 user_agent += " " + kwargs["user_agent"]
--> 234 self.client = _get_search_client(
235 azure_search_endpoint,
236 azure_search_key,
237 index_name,
238 semantic_configuration_name=semantic_configuration_name,
239 fields=fields,
240 vector_search=vector_search,
241 semantic_settings=semantic_settings,
242 scoring_profiles=scoring_profiles,
243 default_scoring_profile=default_scoring_profile,
244 default_fields=default_fields,
245 user_agent=user_agent,
246 )
247 self.search_type = search_type
248 self.semantic_configuration_name = semantic_configuration_name
File /anaconda/envs/jupyter_env/lib/python3.8/site-packages/langchain/vectorstores/azuresearch.py:83, in _get_search_client(endpoint, key, index_name, semantic_configuration_name, fields, vector_search, semantic_settings, scoring_profiles, default_scoring_profile, default_fields, user_agent)
81 from azure.search.documents import SearchClient
82 from azure.search.documents.indexes import SearchIndexClient
---> 83 from azure.search.documents.indexes.models import (
84 HnswVectorSearchAlgorithmConfiguration,
85 PrioritizedFields,
86 SearchIndex,
87 SemanticConfiguration,
88 SemanticField,
89 SemanticSettings,
90 VectorSearch,
91 )
93 default_fields = default_fields or []
94 if key is None:
ImportError: cannot import name 'HnswVectorSearchAlgorithmConfiguration' from 'azure.search.documents.indexes.models' (/anaconda/envs/jupyter_env/lib/python3.8/site-packages/azure/search/documents/indexes/models/__init__.py)
```
### Expected behavior
No import error | ImportError: cannot import name 'HnswVectorSearchAlgorithmConfiguration' from 'azure.search.documents.indexes.models' | https://api.github.com/repos/langchain-ai/langchain/issues/9985/comments | 9 | 2023-08-30T08:07:19Z | 2024-05-07T16:04:48Z | https://github.com/langchain-ai/langchain/issues/9985 | 1,873,154,024 | 9,985 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
import boto3
from langchain.retrievers import AmazonKendraRetriever
retriever = AmazonKendraRetriever(index_id="xxxxxxxxxxxxxxxxxxxx")
retriever.get_relevant_documents("what is the tax")
```
**Facing the below error**
----------------------------------------
```
AttributeError Traceback (most recent call last)
Cell In[48], line 9
5 # retriever = AmazonKendraRetriever(kendraindex='dfba3dce-b6eb-4fec-b98c-abe17a58cf30',
6 # awsregion='us-east-1',
7 # return_source_documents=True)
8 retriever = AmazonKendraRetriever(index_id="7835d77a-470b-4545-9613-508ed8fe82d3")
----> 9 retriever.get_relevant_documents("What is dog")
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/schema/retriever.py:208, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
206 except Exception as e:
207 run_manager.on_retriever_error(e)
--> 208 raise e
209 else:
210 run_manager.on_retriever_end(
211 result,
212 **kwargs,
213 )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/schema/retriever.py:201, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
199 _kwargs = kwargs if self._expects_other_args else {}
200 if self._new_arg_supported:
--> 201 result = self._get_relevant_documents(
202 query, run_manager=run_manager, **_kwargs
203 )
204 else:
205 result = self._get_relevant_documents(query, **_kwargs)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/retrievers/kendra.py:421, in AmazonKendraRetriever._get_relevant_documents(self, query, run_manager)
407 def _get_relevant_documents(
408 self,
409 query: str,
410 *,
411 run_manager: CallbackManagerForRetrieverRun,
412 ) -> List[Document]:
413 """Run search on Kendra index and get top k documents
414
415 Example:
(...)
419
420 """
--> 421 result_items = self._kendra_query(query)
422 top_k_docs = self._get_top_k_docs(result_items)
423 return top_k_docs
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/retrievers/kendra.py:390, in AmazonKendraRetriever._kendra_query(self, query)
387 if self.user_context is not None:
388 kendra_kwargs["UserContext"] = self.user_context
--> 390 response = self.client.retrieve(**kendra_kwargs)
391 r_result = RetrieveResult.parse_obj(response)
392 if r_result.ResultItems:
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/botocore/client.py:875, in BaseClient.__getattr__(self, item)
872 if event_response is not None:
873 return event_response
--> 875 raise AttributeError(
876 f"'{self._class.name_}' object has no attribute '{item}'"
877 )
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a Kendra index and then use it to run the code below.
import boto3
from langchain.retrievers import AmazonKendraRetriever
retriever = AmazonKendraRetriever(index_id="xxxxxxxxxxxxxxxxxxxx")
retriever.get_relevant_documents("what is the tax")
### Expected behavior
Fetch the document from the index and retrun it like it was happening in previous version. | AttributeError: 'kendra' object has no attribute 'retrieve' | https://api.github.com/repos/langchain-ai/langchain/issues/9982/comments | 5 | 2023-08-30T07:14:43Z | 2024-01-26T18:57:38Z | https://github.com/langchain-ai/langchain/issues/9982 | 1,873,068,703 | 9,982 |
[
"hwchase17",
"langchain"
]
| ### System Info
First, Thank you so much for your work on Langchain, it's very good.
I am trying to compare two documents following the guide from langchain https://python.langchain.com/docs/integrations/toolkits/document_comparison_toolkit
I have written exactly the same code:
I have one class to use for the args_schema on the tools creation:
```
class DocumentInput(BaseModel):
    question: str = Field()
```
I have created the tools :
```
tools.append(
Tool(
args_schema=DocumentInput,
name=file_name,
description=f"useful when you want to answer questions about {file_name}",
func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever)
)
)
```
```
agent = initialize_agent(
agent=AgentType.OPENAI_FUNCTIONS,
tools=tools,
llm=llm,
verbose=True,
)
```
And here I am getting the error:
"1 validation error for Tool\nargs_schema\n subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)",
I have changed the args_schema class to :
```
from abc import ABC
from langchain.tools import BaseTool
from pydantic import Field
class DocumentInput(BaseTool, ABC):
    question: str = Field()
```
And now the error, I am getting is:
("Value not declarable with JSON Schema, field: name='_callbacks_List[langchain.callbacks.base.BaseCallbackHandler]' type=BaseCallbackHandler required=True",)
I only want to compare the content of two documents. Do you have a working example that compares two files? Maybe I am calling the tool creation incorrectly.
### Who can help?
@yajunDai
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have followed exactly the guide at https://python.langchain.com/docs/integrations/toolkits/document_comparison_toolkit to compare two documents, and I am getting the error:
"1 validation error for Tool\nargs_schema\n subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)",
The args_schema class is :
```
class DocumentInput(BaseModel):
    question: str = Field()
```
I am trying to create the tools:
```
tools.append(
Tool(
args_schema=DocumentInput,
name=file_name,
description=f"useful when you want to answer questions about {file_name}",
func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever)
)
)
```
and here I am getting the error:
"1 validation error for Tool\nargs_schema\n subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)",
After that I run the following, but I can't get to initialize_agent:
```
agent = initialize_agent(
agent=AgentType.OPENAI_FUNCTIONS,
tools=tools,
llm=llm,
verbose=True,
)
```
It's exactly the guide for document comparison on LangChain.
### Expected behavior
Expected behaviour, that I can compare the content for two documents without error | Document Comparison toolkit is not working | https://api.github.com/repos/langchain-ai/langchain/issues/9981/comments | 15 | 2023-08-30T06:02:22Z | 2024-06-22T16:34:20Z | https://github.com/langchain-ai/langchain/issues/9981 | 1,872,965,623 | 9,981 |
[
"hwchase17",
"langchain"
]
| ### Feature request
When using the Chroma vector store, the stored documents can only be retrieved with a search query. There is no method that loads all documents.
### Motivation
The closest thing to retrieving all documents is using
`vectordb._collection.get()`, whose output is a dictionary instead of Document objects. This makes it hard to feed the stored documents into retrievers which Chroma does not support, like TF-IDF and BM25.
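For context, a minimal sketch of the workaround implied above: turning the raw `_collection.get()` dictionary back into `Document` objects (it relies on a private attribute, so it is fragile):
```python
from langchain.schema import Document

def get_all_documents(vectordb) -> list:
    """Rebuild Document objects from Chroma's raw collection dump."""
    raw = vectordb._collection.get(include=["documents", "metadatas"])
    return [
        Document(page_content=text, metadata=meta or {})
        for text, meta in zip(raw["documents"], raw["metadatas"])
    ]
```
The resulting list could then be handed to, for example, a BM25 or TF-IDF retriever.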
### Your contribution
Not yet. | Chroma - retrieve all documents | https://api.github.com/repos/langchain-ai/langchain/issues/9980/comments | 2 | 2023-08-30T05:44:19Z | 2024-05-15T05:58:55Z | https://github.com/langchain-ai/langchain/issues/9980 | 1,872,945,320 | 9,980 |
[
"hwchase17",
"langchain"
]
| ### System Info
colab notebook
### Who can help?
@hwchase17 @agola
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
getting error: ---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-25-74e62d788fe7> in <cell line: 9>()
7 import openai
8 from langchain.chains import LLMBashChain, LLMChain, RetrievalQA, SimpleSequentialChain
----> 9 from langchain.chains.summarize import load_summarize_chain
10 from langchain.chat_models import ChatOpenAI
11 from langchain.docstore.document import Document
/usr/local/lib/python3.10/dist-packages/langchain/chains/summarize/__init__.py in <module>
9 from langchain.chains.summarize import map_reduce_prompt, refine_prompts, stuff_prompt
10 from langchain.prompts.base import BasePromptTemplate
---> 11 from langchain.schema import BaseLanguageModel
12
13
ImportError: cannot import name 'BaseLanguageModel' from 'langchain.schema' (/usr/local/lib/python3.10/dist-packages/langchain/schema/__init__.py)
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
```
```
https://colab.research.google.com/drive/1xH7coRd2AnZFdejGQ2nyJWuNrTNMdvSa?usp=sharing
```
### Expected behavior
basic issues running the libraries | mportError: cannot import name 'BaseLanguageModel' from 'langchain.schema' (/usr/local/lib/python3.10/dist-packages/langchain/schema/init.py) | https://api.github.com/repos/langchain-ai/langchain/issues/9977/comments | 2 | 2023-08-30T04:54:31Z | 2023-12-06T17:43:10Z | https://github.com/langchain-ai/langchain/issues/9977 | 1,872,899,721 | 9,977 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The Generative Agents demo shows a generic conversation between individual agents.
There are two features I need clarity on, namely whether they already exist (and if so, how to implement them) or still need to be worked on:
1. Tool integration for the agents: can the agents integrate with a tool such as a calendar or a clock to take real-time actions? (A small illustrative sketch follows this list.)
2. Agents waiting for the other agent's response or action to be completed. Currently the agents reply to what is mostly a text response; the requirement here is that an agent waits for the other agent's action to be completed, which is non-textual.
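Regarding item 1, a small, hedged sketch of the kind of clock tool meant here; how such a tool would actually be wired into a generative agent is exactly the open question:
```python
from datetime import datetime
from langchain.agents import Tool

clock_tool = Tool(
    name="clock",
    func=lambda _: datetime.now().isoformat(),
    description="Returns the current date and time.",
)
```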
### Motivation
The motivation here is an autonomous world simulation that goes beyond textual conversation between the agents.
Tool integration can bring more realistic behaviour into the picture.
### Your contribution
I have read the paper and know the approach.
I can contribute on the development of the feature of that does not exist yet. | Generative Agents in LangChain Tool Integration | https://api.github.com/repos/langchain-ai/langchain/issues/9976/comments | 14 | 2023-08-30T04:43:59Z | 2023-12-14T16:06:18Z | https://github.com/langchain-ai/langchain/issues/9976 | 1,872,890,614 | 9,976 |
[
"hwchase17",
"langchain"
]
| ### Feature request
GCS blobs can have custom metadata, defined either in the Google console or programmatically, as shown below:
```
from google.cloud import storage
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(destination_blob_name)
metadata = {'from_source': 'https://localhost', 'genre': 'sci-fi'}
blob.metadata = metadata
blob.upload_from_filename("/home/jupyter/svc-mlp-staging.json", if_generation_match=0)
```
GCSFileLoader could read the blob's metadata, if present, and populate it into each document's metadata before returning the docs.
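A rough sketch of the proposed behaviour (illustrative only; it bypasses GCSFileLoader's actual loading logic and just shows the metadata merge):
```python
from google.cloud import storage
from langchain.schema import Document

def load_blob_with_metadata(bucket_name: str, blob_name: str) -> Document:
    """Download a GCS blob as text and carry its custom metadata along."""
    client = storage.Client()
    blob = client.bucket(bucket_name).get_blob(blob_name)  # get_blob populates blob.metadata
    doc_metadata = {"source": f"gs://{bucket_name}/{blob_name}"}
    if blob.metadata:  # custom metadata, e.g. {'from_source': ..., 'genre': ...}
        doc_metadata.update(blob.metadata)
    return Document(page_content=blob.download_as_text(), metadata=doc_metadata)
```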
### Motivation
1. This feature helps load documents with the required custom metadata.
2. Splits (produced by a splitter) and the vector embeddings of those splits can then be identified by the custom metadata in the vector store.
### Your contribution
Interested in taking this up | GCSFileLoader need to read blob's metadata and populate it to documents metadata | https://api.github.com/repos/langchain-ai/langchain/issues/9975/comments | 4 | 2023-08-30T03:58:52Z | 2024-02-11T16:15:26Z | https://github.com/langchain-ai/langchain/issues/9975 | 1,872,845,768 | 9,975 |
[
"hwchase17",
"langchain"
]
| ### System Info
Platform: MacOS 13.5
Python Version: 3.10.11
Langchain: 0.0.257
Azure Search: 1.0.0b2
Azure Search Documents: 11.4.0b6
Openai: 0.27.8
Issue: AttributeError: module 'azure.search.documents.indexes.models._edm' has no attribute 'Single'
Code: `az_search = AzureSearch(azure_search_endpoint=os.getenv('AZURE_COGNITIVE_SEARCH_SERVICE_NAME'),
azure_search_key=os.getenv('AZURE_COGNITIVE_SEARCH_API_KEY'),
index_name=os.getenv('AZURE_COGNITIVE_SEARCH_INDEX_NAME'),
embedding_function=embeddings.embed_query)`
I read in another post here (https://github.com/langchain-ai/langchain/issues/8917) that this is caused by a version mismatch, so I downgraded my azure-search-documents package to 11.4.0b6, but the same error occurred.
I've also tried using langchain==0.0.245 or langchain==0.0.247, but that didn't solve the issue.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"]="vectordb001index"
embeddings = OpenAIEmbeddings(deployment_id="vectorDBembedding", chunk_size=1)
az_search = AzureSearch(azure_search_endpoint=os.getenv('AZURE_COGNITIVE_SEARCH_SERVICE_NAME'),
azure_search_key=os.getenv('AZURE_COGNITIVE_SEARCH_API_KEY'),
index_name=os.getenv('AZURE_COGNITIVE_SEARCH_INDEX_NAME'),
embedding_function=embeddings.embed_query)
```
**Error Output**
```
File [~/Desktop/.../azuresearch.py:221](https://file+.vscode-resource.vscode-cdn.net/.../azuresearch.py:221), in AzureSearch.__init__(self, azure_search_endpoint, azure_search_key, index_name, embedding_function, search_type, semantic_configuration_name, semantic_query_language, fields, vector_search, semantic_settings, scoring_profiles, default_scoring_profile, **kwargs)
206 # Initialize base class
207 self.embedding_function = embedding_function
208 default_fields = [
209 SimpleField(
210 name=FIELDS_ID,
211 type=SearchFieldDataType.String,
212 key=True,
213 filterable=True,
214 ),
215 SearchableField(
216 name=FIELDS_CONTENT,
217 type=SearchFieldDataType.String,
218 ),
...
230 ]
231 user_agent = "langchain"
232 if "user_agent" in kwargs and kwargs["user_agent"]:
AttributeError: module 'azure.search.documents.indexes.models._edm' has no attribute 'Single'
```
### Expected behavior
Program running with no error interruption | azure.search.documents.indexes.models._edm no attribute "Single" under Langchain.AzureSearch() | https://api.github.com/repos/langchain-ai/langchain/issues/9973/comments | 5 | 2023-08-30T03:14:22Z | 2024-02-14T16:11:03Z | https://github.com/langchain-ai/langchain/issues/9973 | 1,872,811,739 | 9,973 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.11.4
langchain-0.0.276
currently troubleshooting on a Windows 11 workstation in a notebook in VSCode.
### Who can help?
@hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm trying to use a structured output chain with some defined Pydantic classes. The chain works when using ChatOpenAI model but not AzureChatOpenAI. I'm able to generate non-chained chat completions with AzureChatOpenAI, so I'm pretty confident the issue isn't with my configuration of the AzureChatOpenAI model.
Representative code:
```
# ChatOpenAI using native OpenAI endpoint
openai_model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
# AzureChatOpenAI using private endpoint through Azure OpenAI Service
azure_model = AzureChatOpenAI(
openai_api_base="[hiding my api base]",
openai_api_version="2023-07-01-preview",
deployment_name="[hiding my model deployment name]",
model_name = "[hiding my model name]",
openai_api_key="[hiding my API key]",
openai_api_type="azure",
temperature = 1
)
# define representative singleton class
class GeneratedItem(BaseModel):
    """Information about a generated item."""
    item_nickname: str = Field(..., description="A creative nickname for an item that might be found in some room")
    item_purpose: str = Field(..., description="The purpose of the item")

# define the plural class as a Sequence of the above singletons
class GeneratedItems(BaseModel):
    """information about all items generated"""
    items: Sequence[GeneratedItem] = Field(..., description="A sequence of items")
# define messages for a prompt
prompt_msgs = [
SystemMessage(
content="You are a world class algorithm for generating information in structured formats."
),
HumanMessage(
content="Use the given format to generate 2 items that might be in a room described as follows"
),
HumanMessagePromptTemplate.from_template("{input}"),
HumanMessage(content="Tips: Make sure to answer in the correct format"),
]
# define the prompt using ChatPromptTemplate
prompt = ChatPromptTemplate(messages=prompt_msgs)
# define and execute structured output chain with ChatOpenAI model
chain1 = create_structured_output_chain(GeneratedItems, openai_model, prompt, verbose=False)
chain1.run("A living room with green chairs and a wooden coffee table")
# define and execute structued output chain with AzureChatOpenAI model
chain2 = create_structured_output_chain(GeneratedItems, azure_model, prompt, verbose=False)
chain2.run("A living room with green chairs and a wooden coffee table")
```
### Expected behavior
In the above code, the `chain1.run()` using ChatOpenAI executes successfully returning something like the below:
{'items': [{'item_nickname': 'green chairs', 'item_purpose': 'seating'},
{'item_nickname': 'wooden coffee table', 'item_purpose': 'surface'}]}
However, when executing `chain2.run()` which uses AzureChatOpenAI, this behavior is not replicated. Instead, I receive the below errors (including full traceback to help troubleshoot)
```
KeyError Traceback (most recent call last)
Cell In[23], line 55
50 # chain1 = create_structured_output_chain(GeneratedItems, openai_model, prompt, verbose=False)
51 # chain1.run("A living room with green chairs and a wooden coffee table")
54 chain2 = create_structured_output_chain(GeneratedItems, azure_model, prompt, verbose=False)
---> 55 chain2.run("A living room with green chairs and a wooden coffee table")
File ...\.venv\Lib\site-packages\langchain\chains\base.py:441, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
439 if len(args) != 1:
440 raise ValueError("`run` supports only one positional argument.")
--> 441 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
442 _output_key
443 ]
445 if kwargs and not args:
446 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
447 _output_key
448 ]
File ...\.venv\Lib\site-packages\langchain\chains\base.py:244, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
242 except (KeyboardInterrupt, Exception) as e:
243 run_manager.on_chain_error(e)
--> 244 raise e
245 run_manager.on_chain_end(outputs)
246 final_outputs: Dict[str, Any] = self.prep_outputs(
...
--> 103 content = _dict["content"] or "" # OpenAI returns None for tool invocations
104 if _dict.get("function_call"):
105 additional_kwargs = {"function_call": dict(_dict["function_call"])}
KeyError: 'content'
```
Ideally this would return structured output for the AzureChatOpenAI model in exactly the same manner as it does for the ChatOpenAI model. Maybe I missed something in the docs, but I think this is a source-side issue: the Azure response dict does not contain the `content` key that the `_dict["content"]` lookup in the message-conversion code expects.
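For anyone hitting the same traceback, below is a minimal defensive sketch of the conversion step; it is not the official fix, only an illustration of how the lookup could tolerate a missing key (the helper name is invented for the example):
```python
# Sketch of a more defensive dict -> message conversion (illustrative only).
from langchain.schema import AIMessage

def convert_response_dict_to_message(_dict: dict) -> AIMessage:
    content = _dict.get("content") or ""  # tolerate a missing or None content key
    additional_kwargs = {}
    if _dict.get("function_call"):
        additional_kwargs["function_call"] = dict(_dict["function_call"])
    return AIMessage(content=content, additional_kwargs=additional_kwargs)
```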
Please let me know if there are any workarounds or solutions I should attempt and/or some documentation I may not have found. | create_structured_output_chain does not work with with AzureChatOpenAI model? | https://api.github.com/repos/langchain-ai/langchain/issues/9972/comments | 3 | 2023-08-30T02:28:18Z | 2024-02-12T16:14:40Z | https://github.com/langchain-ai/langchain/issues/9972 | 1,872,777,987 | 9,972 |
[
"hwchase17",
"langchain"
]
| ### System Info
**I'm directly providing these arguments to the LLM via the prompt:**
**SOURCE CODE:**
```python
from langchain import OpenAI, SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
from langchain.chains import create_sql_query_chain
from langchain.prompts.prompt import PromptTemplate
import environ
env = environ.Env()
environ.Env.read_env()
API_KEY = env('OPENAI_API_KEY')
db = SQLDatabase.from_uri(
f"postgresql+psycopg2://postgres:{env('DBPASS')}@localhost:5432/{env('DATABASE')}",
)
llm = OpenAI(temperature=0, openai_api_key=API_KEY)
_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Use the following format:
Question:"Question here"
SQLQuery:"SQL Query to run"
SQLResult:"Result of the SQLQuery"
Answer:"Final answer here"
Only use the following tables:
{table_info}
If someone asks for the table foobar, they really mean the tasks table.
Question: {input}"""
PROMPT = PromptTemplate(
input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE
)
db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, use_query_checker=True)
def get_prompt():
print("Digite 'exit' para sair")
while True:
prompt = input("Entre c/ uma pergunta (prompt): ")
if prompt.lower() == 'exit':
print('Saindo...')
break
else:
try:
result = db_chain(prompt)
print(result)
except Exception as e:
print(e)
get_prompt()
```
**But the result comes back wrapped in brackets and escape-code numbers:**
```
←[1m> Entering new SQLDatabaseChain chain...←[0m
how many tasks do we have?
SQLQuery:←[32;1m←[1;3mSELECT COUNT(*) FROM tasks;←[0m
SQLResult: ←[33;1m←[1;3m[(6,)]←[0m
Answer:←[32;1m←[1;3mWe have 6 tasks.←[0m
←[1m> Finished chain.←[0m
```
**HOW CAN I FIX IT?**
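For context, the `←[32;1m←[1;3m` fragments are ANSI colour escape sequences produced by the chain's verbose console output; the Windows console prints them literally instead of interpreting them. A minimal sketch of one way to strip them from any captured text (plain regex, independent of LangChain):
```python
import re

ANSI_ESCAPE = re.compile(r"\x1b\[[0-9;]*m")  # matches colour codes such as \x1b[32;1m

def strip_ansi(text: str) -> str:
    """Remove ANSI colour escape sequences from a string."""
    return ANSI_ESCAPE.sub("", text)

print(strip_ansi("\x1b[32;1m\x1b[1;3mWe have 6 tasks.\x1b[0m"))  # -> We have 6 tasks.
```
Alternatively, calling `colorama.init()` at program start usually makes the Windows console interpret these codes instead of printing them.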
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I need the answers without these escape-code numbers: [32;1m←[1;3m
### Expected behavior
I need the answers without these escape-code numbers: [32;1m←[1;3m | SQL Chain Result - Error | https://api.github.com/repos/langchain-ai/langchain/issues/9959/comments | 4 | 2023-08-29T21:27:00Z | 2023-08-30T19:39:38Z | https://github.com/langchain-ai/langchain/issues/9959 | 1,872,502,344 | 9,959
[
"hwchase17",
"langchain"
]
| ### Feature request
Weaviate introduced multi-tenancy support in version 1.20
https://weaviate.io/blog/multi-tenancy-vector-search
### Motivation
This can help users running LangChain + Weaviate at scale, ingesting documents and attaching tenants to them.
### Your contribution
I have implemented this, but I would need some help checking that everything is OK and in accordance with LangChain conventions.
Also, I would like help with `as_retriever`, as I was not able to implement multi-tenancy on it yet; a hypothetical usage sketch of the proposed API follows below.
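For discussion, here is a rough sketch of what the user-facing API could look like once tenants are supported. The `tenant=` keyword and the retriever `search_kwargs` usage are hypothetical; they only illustrate the shape of the proposal, not an existing interface:
```python
# Hypothetical usage sketch for the proposed multi-tenant support.
# The tenant= keyword does not exist yet; it illustrates the feature request.
import weaviate
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Weaviate

client = weaviate.Client("http://localhost:8080")
db = Weaviate(client, index_name="Docs", text_key="text", embedding=OpenAIEmbeddings())

db.add_texts(["document for tenant A"], tenant="tenant-a")          # hypothetical write scope
docs = db.similarity_search("query", tenant="tenant-a")             # hypothetical read scope
retriever = db.as_retriever(search_kwargs={"tenant": "tenant-a"})   # the open question above
```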
the code is living here: https://github.com/dudanogueira/langchain/tree/weaviate-multitenant | Multi Tenant Support for Weaviate | https://api.github.com/repos/langchain-ai/langchain/issues/9956/comments | 4 | 2023-08-29T20:56:22Z | 2024-03-13T19:56:50Z | https://github.com/langchain-ai/langchain/issues/9956 | 1,872,444,158 | 9,956 |
[
"hwchase17",
"langchain"
]
| ### System Info
Getting error: got multiple values for keyword argument `question_generator`.
return cls(\nTypeError: langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain() got multiple values for keyword argument \'question_generator\'', 'SystemError'
```python
Qtemplate = (
    "Combine the chat history and follow up question into "
    "a standalone question. Chat History: {chat_history}"
    "Follow up question: {question} without changing the real meaning of the question itself."
)
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(Qtemplate)
question_generator_chain = LLMChain(llm=OpenAI(openai_api_key=openai.api_key), prompt=CONDENSE_QUESTION_PROMPT)
chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=self.vector_store.as_retriever(),
    combine_docs_chain_kwargs=chain_type_kwargs,
    verbose=True,
    return_source_documents=True,
    get_chat_history=lambda h: h,
    memory=window_memory,
    question_generator=question_generator_chain
)
```
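The likely cause is that `from_llm` already builds its own question-generator chain internally, so passing `question_generator` as well collides with that default. A minimal sketch of the usual alternative, handing the prompt to `from_llm` through `condense_question_prompt` (variable names reuse the snippet above):
```python
# Sketch: let from_llm build the question generator from the custom prompt,
# instead of passing a pre-built question_generator chain alongside it.
chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=self.vector_store.as_retriever(),
    condense_question_prompt=CONDENSE_QUESTION_PROMPT,  # replaces the question_generator kwarg
    combine_docs_chain_kwargs=chain_type_kwargs,
    return_source_documents=True,
    get_chat_history=lambda h: h,
    memory=window_memory,
    verbose=True,
)
```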
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behaviour:
1. Generate a standalone question that does not change the meaning of the original question; if it would change the meaning, keep the question as it is.
2. Generate output using memory and get the most accurate answer.
### Expected behavior
Expecting the right code for implementing same functionality. | ConversationalRetrievalChain [got multiple argument for question_generator] | https://api.github.com/repos/langchain-ai/langchain/issues/9952/comments | 5 | 2023-08-29T20:03:57Z | 2024-02-13T16:13:02Z | https://github.com/langchain-ai/langchain/issues/9952 | 1,872,339,935 | 9,952 |
[
"hwchase17",
"langchain"
]
| ### System Info
I have a tool with one required argument _chart_data_ and one optional argument _chart_title_. The tool is defined using the BaseModel class from Pydantic and is decorated with @tool("charts", args_schema=ChartInput).
However, optional arguments are pushed into the 'required' list that is being passed to OpenAI.
Do you have any suggestions for resolving this issue? GPT-3.5 consistently prompts for the chart_title argument, even though it's supposed to be optional.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here's the code snippet:
langchain==0.0.274
```python
# Imports added for completeness (the original snippet omitted them)
from typing import Optional

from pydantic import BaseModel, Field
from langchain.tools import tool

class ChartInput(BaseModel):
chart_data: list = Field(..., description="Data for chart")
chart_title: Optional[str] = Field(None, description="The title for the chart.")
@tool("charts", args_schema=ChartInput)
def charts_tool(chart_data: list, chart_title: Optional[str]=None):
'''useful when creating charts'''
return 'chart image url'
```
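Until the conversion handles optional fields, one possible interim workaround is to post-process the generated function spec before it is sent to OpenAI (this only applies if you build the `functions` payload yourself rather than letting an agent do it); a sketch:
```python
from langchain.tools import format_tool_to_openai_function

fn_spec = format_tool_to_openai_function(charts_tool)
# Keep only the arguments that are genuinely mandatory.
fn_spec["parameters"]["required"] = ["chart_data"]
# fn_spec can now be passed in the functions=[...] list of the chat completion call.
```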
### Expected behavior
When printing the output of `format_tool_to_openai_function(charts_tool)`, I can see that `chart_title`, which is an optional argument, is pushed into the required args list.
`{'name': 'charts', 'description': 'charts(chart_data: list, chart_title: Optional[str] = None) - useful when creating charts', 'parameters': {'type': 'object', 'properties': {'chart_data': {'title': 'Chart Data', 'description': 'data for chart', 'type': 'array', 'items': {}}, 'chart_title': {'title': 'Chart Title', 'description': 'The title for the chart.', 'type': 'string'}}, 'required': ['chart_data', 'chart_title']}}` | Optional Arguments Treated as Required by "format_tool_to_openai_function" | https://api.github.com/repos/langchain-ai/langchain/issues/9942/comments | 3 | 2023-08-29T16:29:41Z | 2023-12-06T17:43:15Z | https://github.com/langchain-ai/langchain/issues/9942 | 1,872,021,415 | 9,942 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I want to request functionality for a decision-tree / flow-chart-like prompt architecture. The idea is that there would be a Prompt Tree that starts on a specific branch and then allows the LLM to select new branches as part of its toolkit. Each branch would have its own prompts, meaning that the AI does not need to be given all the information up front and can instead break down its commands into bite-sized chunks that it sees at each branch in the tree.
### Motivation
This would help chatbot workflows by limiting the amount of information the LLM sees at each point in time; it can collect variables through different branches of the tree for later use, and it would improve the reliability of LLM outputs because it would be easier to implement checks. It also could eliminate the need for a scratchpad, which can become costly if abused by the LLM.
Also, this feature is available in other systems such as [LLMFlows](https://github.com/stoyan-stoyanov/llmflows) and [Amazon Lex](https://aws.amazon.com/lex/), and from what I have seen it comes up frequently on the message boards here.
### Your contribution
I have made a simple example script to show how this could work in principle. However, I do not have experience contributing to open source projects, so I am not sure what formatting mistakes I may be making, nor where exactly in the object hierarchy this should belong (is this a type of Prompt? Or Agent?). I would love to learn what is needed to incorporate this into the LangChain functionality.
In my example I make a PromptTree class which stores the state and can access the current prompt. Inside the tree are a variety of branches which point to each other according to a dictionary. Each branch produces a tool which allows the AI to switch branches by updating the PromptTree.
```python
# Import libraries
import ast
from pydantic.v1 import BaseModel, Field
from langchain.tools import Tool
from langchain.schema import HumanMessage, AIMessage, SystemMessage, FunctionMessage
from langchain.tools import format_tool_to_openai_function
from langchain.chat_models import ChatOpenAI
### Define PromptBranch ###
# Declare function name variable
SELECT_BRANCH = 'select_branch'
UPDATE_INSIGHT = 'update_insight'
# Create PromptTreeBranch class
class PromptBranch:
"""A branch in the PromptTree."""
# Declare PromptBranch variables
description = None # Default description of the branch
header = None # Header prompt
footer = None # Footer prompt
children = {} # Dictionary of children branches with descriptions. Format={name: description (None for default)}
initial_state = {} # Initial state of the branch
pass_info = {} # Additional info to be passed to children
insights = {} # Dictionary of insights that the AI can update. Format={name: description}
# Get branch ID
@property
def branch_id(self):
"""Get the branch ID."""
return type(self).__name__
def __init__(self, parent, **kwargs):
"""Initialize the PromptBranch."""
self.parent = parent
self.initialize_state(**kwargs)
return
def initialize_state(self, **kwargs):
"""Initialize the branch state."""
# We allow kwargs to be passed in case the branch needs to be initialized with additional info
self.state = {
**self.initial_state,
'insights': {x: None for x in self.insights.keys()} # Initialize insights to None
}
return
def __call__(self, messages):
"""Call the PromptBranch."""
return (
self.get_prompt(messages),
self.get_tools(),
)
def get_pass_info(self):
"""Pass info to children."""
return self.pass_info
def get_prompt(self, messages):
"""Get the prompt."""
# Initialize prompt
prompt = []
# Add preamble
preamble = self.parent.preamble
if preamble is not None:
prompt.append(SystemMessage(content=preamble))
# Add header
header = self.get_header()
if header is not None:
prompt.append(SystemMessage(content=header))
# Add messages
prompt += messages
# Add footer
footer = self.get_footer()
if footer is not None:
prompt.append(SystemMessage(content=footer))
# Add insights
insights = self.get_insights()
if insights is not None:
prompt.append(SystemMessage(content=insights))
# Return
return prompt
def get_header(self):
"""Get header."""
return self.header
def get_footer(self):
"""Get footer."""
return self.footer
def get_insights(self):
"""Get insights."""
if len(self.insights) == 0:
return None
else:
insights = f"Your insights so far are:"
for name, state in self.state['insights'].items():
insights += f"\n{name}: {state}"
return insights
def get_tools(self):
"""Get tools."""
# Initialize tools
tools = []
# Add switch branch tool
if len(self.children) > 0:
tools.append(self._tool_switch_branch())
# Add update insights tool
if len(self.insights) > 0:
tools.append(self._tool_update_insight())
# Return
return tools
def _tool_switch_branch(self):
"""Create tool to select next branch."""
# Get variables
tool_name = SELECT_BRANCH
children = self.children
# Create tool function
tool_func = self.switch_branch
# Create tool description
tool_description = "Select the next branch to continue the conversation. Your options are:"
for branch_id, branch_description in children.items():
if branch_description is None:
branch_description = self.parent.all_branches[branch_id].description
tool_description += f"\n{branch_id}: {branch_description}"
# Create tool schema
class ToolSchema(BaseModel):
branch: str = Field(
description="Select next branch.",
enum=list(children.keys()),
)
# Create tool
tool_obj = Tool(
name=tool_name,
func=tool_func,
description=tool_description,
args_schema=ToolSchema,
)
# Return
return tool_obj
def _tool_update_insight(self):
"""Create tool to update an insight."""
# Get variables
tool_name = UPDATE_INSIGHT
insights = self.insights
# Create tool function
tool_func = self.update_insight
# Create tool description
tool_description = "Update an insight. You can choose to update any of the following insights:"
for name, state in insights.items():
tool_description += f"\n{name}: {state}"
# Create tool schema
class ToolSchema(BaseModel):
insight: str = Field(
description="Select insight to update.",
enum=list(insights.keys()),
)
value: str = Field(
description="New value of the insight.",
)
# Create tool
tool_obj = Tool(
name=tool_name,
func=tool_func,
description=tool_description,
args_schema=ToolSchema,
)
# Return
return tool_obj
def switch_branch(self, branch):
"""Switch to a new branch."""
# Switch parent tree branch
self.parent.branch = self.parent.all_branches[branch](parent=self.parent, **self.get_pass_info())
# Return function message
message = FunctionMessage(
name=SELECT_BRANCH,
content=f"You have switched to the {branch} branch.",
additional_kwargs={'internal_function': True},
)
return message
def update_insight(self, insight, value):
"""Update an insight."""
# Update insight
self.state['insights'][insight] = value
# Return function message
message = FunctionMessage(
name=UPDATE_INSIGHT,
content=f"You have updated the {insight} insight to {value}.",
additional_kwargs={'internal_function': True},
)
return message
### Define PromptTree ###
# Create PromptTree class
class PromptTree:
"""A decision tree for prompting the AI."""
# Declare PromptTree variables
preamble = None # System prompt to put before each branch prompt
first_branch = None # Name of first branch to start the prompt tree
all_branches = {} # Dictionary of all branches in the tree. Format={branch_id: branch_class}
def __init__(self):
"""Initialize the PromptTree branch state."""
self.branch = self.all_branches[self.first_branch](parent=self)
return
def __call__(self, messages, **kwargs):
"""Call the PromptTree."""
return self.branch(messages, **kwargs)
def get_state(self):
"""Get the current branch state."""
return {
'branch_id': self.branch.branch_id,
'branch_state': self.branch.state,
}
def load_state(self, state):
"""Load a branch from the state."""
branch_id = state['branch_id']
branch_state = state['branch_state']
if branch_id not in self.all_branches:
raise ValueError(f"Unknown branch_id: {branch_id}")
self.branch = self.all_branches[branch_id](parent=self)
self.branch.state = branch_state
return
### Define TreeAgent ###
# Create TreeAgent class
class TreeAgent:
"""An AI agent based on the PromptTree class."""
def __init__(self, tree, model):
"""Initialize the TreeAgent."""
self.tree = tree
self.model = model
return
def __call__(self, messages, **kwargs):
"""Call the TreeAgent."""
return self.respond(messages, **kwargs)
def get_state(self):
"""Get the current state of the TreeAgent."""
return self.tree.get_state()
def load_state(self, state):
"""Load the state of the TreeAgent."""
self.tree.load_state(state)
return
def respond(self, messages):
"""Respond to the messages."""
# Initialize new messages
new_messages = []
# Loop until no function calls
while True:
# Get the prompt
prompt, tools = self.tree(messages+new_messages)
# Get the response
funcs = [format_tool_to_openai_function(t) for t in tools]
response = self.model.predict_messages(prompt, functions=funcs)
new_messages.append(response)
# Check for function calls
if 'function_call' in new_messages[-1].additional_kwargs:
# Get function call
func_call = new_messages[-1].additional_kwargs['function_call']
func_name = func_call['name']
func_args = ast.literal_eval(func_call['arguments'])
func = [x.func for x in tools if x.name == func_name][0]
# Call the function
func_response = func(**func_args)
new_messages.append(func_response)
continue
else:
# If no function call, break
break
# Return
return new_messages
####################################################################################################
####################################################################################################
### EXAMPLE ###
# Create PromptBranches
class BranchA(PromptBranch):
header = "You love icecream, but you only like vanilla icecream."
footer = "If you choose to respond make sure you mention icecream."
description = "A Branch to talk about icecream."
children = {
'BranchB': 'If someone mentions anything fancy, be sure to switch to this branch.',
'BranchC': None,
}
class BranchB(PromptBranch):
header = "You love fine wines, but only if they are over 10 years old."
footer = "If you choose to respond make sure you mention wine."
description = "A Branch to talk about wine."
children = {
'BranchA': None,
'BranchC': None,
}
class BranchC(PromptBranch):
header = "You love going to the beach all the time no matter what."
footer = "If you choose to respond make sure you mention that you love the beach."
description = "A Branch to talk about the beach."
children = {
'BranchA': None,
'BranchB': None,
}
# Create PromptTree
class MyPromptTree(PromptTree):
preamble = "You are an AI who is obsessed with a few things."
first_branch = 'BranchA'
all_branches = {
'BranchA': BranchA,
'BranchB': BranchB,
'BranchC': BranchC,
}
### CONVERSATION ###
# Initialize the AI
llm = ChatOpenAI(model="gpt-3.5-turbo-0613")
tree = MyPromptTree()
agent = TreeAgent(tree, llm)
# Create sample conversation
messages = []
while True:
# Human input
user_message = input("You: ")
messages += [HumanMessage(content=user_message)]
# AI response
new_messages = agent(messages)
for m in new_messages:
print("AI:", m)
messages += new_messages
```
While this may not be a perfect way to go about things, it does demonstrate that with a relatively small amount of code we can work with the existing LangChain architecture to implement a toy model. I think that with a little bit of work this could be made into something very useful.
I would love to learn more about if/how I can help contribute to incorporate this. | Functionality for prompts based on decision tree / flow charts. | https://api.github.com/repos/langchain-ai/langchain/issues/9932/comments | 13 | 2023-08-29T15:02:20Z | 2024-06-19T14:40:49Z | https://github.com/langchain-ai/langchain/issues/9932 | 1,871,868,517 | 9,932 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I wanted to use the new `ParentDocumentRetriever` but found the indexing extremely slow. I think the reason
is this line here https://github.com/langchain-ai/langchain/blob/e80834d783c6306a68df54e6251d9fc307aee87c/libs/langchain/langchain/retrievers/parent_document_retriever.py#L112
the `add_documents` for FAISS calls the `embed_texts` function in a list comprehension here https://github.com/langchain-ai/langchain/blob/e80834d783c6306a68df54e6251d9fc307aee87c/libs/langchain/langchain/vectorstores/faiss.py#L166
This only embeds a single chunk of text at a time, which is really slow, especially when using OpenAIEmbeddings. Replacing this with a call to `OpenAIEmbeddings.embed_documents(docs)` would result in a huge speedup, as it batches things up per API call (default batch size of 1000).
I replaced the `self.vectorstore.add_documents(docs)` with
```python
texts = [doc.page_content for doc in docs]
metadatas = [doc.metadata for doc in docs]
embeddings = OpenAIEmbeddings().embed_documents(texts)
self.vectorstore._FAISS__add(texts, embeddings, metadatas)
```
But a more general solution is needed, because on initialisation only the `embed_function` is stored, not the underlying embedding model itself.
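For reference, a slightly cleaner variant of the same workaround, assuming the vectorstore is FAISS, is to go through the public `add_embeddings` method instead of the name-mangled private one; the batching inside `embed_documents` is what provides the speedup:
```python
# Sketch: batch the embedding call, then hand the precomputed vectors to FAISS.
texts = [doc.page_content for doc in docs]
metadatas = [doc.metadata for doc in docs]
embeddings = OpenAIEmbeddings().embed_documents(texts)  # batched embedding requests
self.vectorstore.add_embeddings(zip(texts, embeddings), metadatas=metadatas)
```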
### Suggestion:
_No response_ | Issue: ParentDocumentRetriever is slow with FAISS because add_documents uses embed_query without batching | https://api.github.com/repos/langchain-ai/langchain/issues/9929/comments | 9 | 2023-08-29T14:04:47Z | 2024-04-08T06:28:03Z | https://github.com/langchain-ai/langchain/issues/9929 | 1,871,751,039 | 9,929 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.275
Python 3.10.12
(Google Colaboratory)
### Who can help?
Hello, @eyurtsev!
We found an issue related to `WebBaseLoader`.
I guess the problem is related to `Response.apparent_encoding`, which `WebBaseLoader` relies on: `chardet.detect()`, which assigns the `apparent_encoding` to a Response object, cannot detect a proper encoding for this document.
Please find the details below.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Import WebBaseLoader
2. Load the test url with WebBaseLoader
3. Check the content
```
from langchain.document_loaders import WebBaseLoader
url = 'https://huggingface.co/docs/transformers/v4.32.0/ko/tasks/sequence_classification'
loader = WebBaseLoader(url)
data = loader.load()
for k, v in data[0].metadata.items():
print(f"{k} : {v}")
content = data[0].page_content
print("content : ")
content[5000:5300] # to truncate the front of the documents with many new lines.
```
then you can see the output as below:
```
source : https://huggingface.co/docs/transformers/v4.32.0/ko/tasks/sequence_classification
title : �스트 분류
description : We’re on a journey to advance and democratize artificial intelligence through open source and open science.
language : No language found.
content :
‹¤.
ì�´ ê°€ì�´ë“œì—�서 í•™ìŠµí• ë‚´ìš©ì�€:
IMDb ë�°ì�´í„°ì…‹ì—�서 DistilBERT를 파ì�¸ 튜ë‹�하여 ì˜�í™” 리뷰가 ê¸�ì •ì �ì�¸ì§€ ë¶€ì •ì �ì�¸ì§€ íŒ�단합니다.
ì¶”ë¡ ì�„ 위해 파ì�¸ 튜ë‹� 모ë�¸ì�„ 사용합니다.
ì�´ íŠœí† ë¦¬ì–¼ì—�서 설명하는 ì�‘ì—…ì�€ 다ì�Œ 모ë�¸ 아키í…�처ì
```
To our knowledge, this is the only case that suffers from this issue.
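As a stop-gap for this page, the loader's charset detection can be bypassed by fetching the page directly and forcing UTF-8; this is only a workaround sketch using `requests` and `BeautifulSoup`, not a change to `WebBaseLoader` itself:
```python
import requests
from bs4 import BeautifulSoup
from langchain.docstore.document import Document

url = "https://huggingface.co/docs/transformers/v4.32.0/ko/tasks/sequence_classification"
resp = requests.get(url)
resp.encoding = "utf-8"  # override the wrongly detected apparent_encoding
soup = BeautifulSoup(resp.text, "html.parser")
doc = Document(page_content=soup.get_text(), metadata={"source": url})
print(doc.page_content[:200])
```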
### Expected behavior
We want it to work like the below (another webpage):
```
source : https://www.tensorflow.org/versions?hl=ko
title : TensorFlow API 버전 | TensorFlow v2.13.0
language : ko-x-mtfrom-en
content :
TensorFlow API 버전 | TensorFlow v2.13.0
설치
학습
소개
TensorFlow를 처음 사용하시나요?
TensorFlow
핵심 오픈소스 ML 라이브러리
```
WebBaseLoader can detect encoding properly for almost all webpages that we know of. | WebBaseLoader encoding issue | https://api.github.com/repos/langchain-ai/langchain/issues/9925/comments | 3 | 2023-08-29T12:51:07Z | 2024-06-26T16:44:57Z | https://github.com/langchain-ai/langchain/issues/9925 | 1,871,608,915 | 9,925 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain: 0.0.275
OpenLLM: 0.2.27
Python: 3.11.1
on Ubuntu 22.04 / Windows 11
### Who can help?
@agola11, @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### On the server side:
1. `conda create --name openllm python=3.11`
2. `conda activate openllm`
3. `pip install openllm`
4. `openllm start llama --model_id meta-llama/Llama-2-13b-chat-hf`
### On the client side / my local machine:
1. `conda create --name openllm python=3.11`
2. `conda activate openllm`
3. `pip install openllm`
4. Execute the following script:
```python
from langchain.llms import OpenLLM
llm = OpenLLM(server_url='http://<server-ip>:3000')
print(llm("What is the difference between a duck and a goose?"))
```
Then, the following error comes up (similar scripts produce the same error):
```bash
File "C:\Users\<User>\.virtualenvs\<env-name>-wcEN-LyC\Lib\site-packages\langchain\load\serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
```
I could solve the error manually by adapting `langchain/llms/base.py` at line 985 to the following line:
```python
generations.append([Generation(text=text["text"], generation_info=text)])
```
### Expected behavior
I would expect the provided example script to work fine, successfully request text generation from the deployed server, and return that text to the user / program. | LangChain cannot deal with new OpenLLM Version (0.2.27) | https://api.github.com/repos/langchain-ai/langchain/issues/9923/comments | 5 | 2023-08-29T11:09:45Z | 2024-04-18T08:04:07Z | https://github.com/langchain-ai/langchain/issues/9923 | 1,871,441,245 | 9,923
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.249
macOS Ventura v13.5.1
Python 3.11.0rc2
### Who can help?
@3coins, @hwchase17, @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Set up AWS access and import all necessary dependencies
2. Initialize LLMs
```python
llm_ai21_j2_mid = Bedrock(
    model_id="ai21.j2-mid",
    model_kwargs={
        'maxTokens': 4096,
        'temperature': 0,
        'topP': 1
    }
)

llm_ai21_j2_ultra = Bedrock(
    model_id="ai21.j2-ultra",
    model_kwargs={
        'maxTokens': 4096,
        'temperature': 0,
        'topP': 1
    }
)
```
3. Run inference on the `ai21.j2-mid` and `ai21.j2-ultra` models in a loop of 10, for example (a sketch follows below).
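A minimal sketch of that loop; the prompt text is arbitrary and only illustrative:
```python
# Sketch of step 3: call both Bedrock endpoints repeatedly.
prompt = "Summarize the benefits of unit testing in two sentences."
for i in range(10):
    print(f"iteration {i}")
    print("j2-mid:  ", llm_ai21_j2_mid(prompt))
    print("j2-ultra:", llm_ai21_j2_ultra(prompt))
```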
### Expected behavior
One of the models will throw a timeout error.
<img width="963" alt="image" src="https://github.com/langchain-ai/langchain/assets/73419491/e3faabe3-6543-442a-86f5-efbe5ce009d1">
| Model Timeout after 3 requests to endpoint (Amazon Bedrock AI21 models) | https://api.github.com/repos/langchain-ai/langchain/issues/9919/comments | 3 | 2023-08-29T10:49:28Z | 2024-03-25T16:05:42Z | https://github.com/langchain-ai/langchain/issues/9919 | 1,871,403,150 | 9,919 |
[
"hwchase17",
"langchain"
]
| ### System Info
Long story short, I use Streamlit to make a demo where I can upload a PDF, click a button, and have the content extracted automatically based on the prompt template's questions.
I really don't understand why I keep getting errors about missing inputs; I have added them in many different ways, but it does not work.
The current error with the code I provided is `ValueError: Missing some input keys: {'query'}`, even though I added it in the template. I also tried different variations in the template, like `input_documents`, `question`, etc., but nothing seems to work.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
code:
```
prompt_template="""... {context} ... {query}"""
questions = """ ...big chunk of questions ..."""
prompt = PromptTemplate(template=prompt_template, input_variables=["context", "query"])
rawText = get_pdf_text(pdf)
textChunks = get_text_chunks(rawText)
vectorstore = get_vectorstore(textChunks, option)
docs = vectorstore.similarity_search(questions)
llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0.1)
chain_type_kwargs = {"prompt": prompt}
chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=vectorstore.as_retriever(),
chain_type_kwargs=chain_type_kwargs)
st.write(chain.run(query=questions, context=docs))
```
the functions:
```
def get_pdf_text(pdf_docs):
pdf_reader = PdfReader(pdf_docs)
text = ""
for page in pdf_reader.pages:
text += page.extract_text()
return text
def get_text_chunks(text):
text_splitter = CharacterTextSplitter(
separator="\n",
chunk_size=1000,
chunk_overlap=200,
length_function=len
)
chunks = text_splitter.create_documents(text)
return chunks
def get_vectorstore(text_chunks, freeEmbedding):
embeddings = OpenAIEmbeddings()
if freeEmbedding == "Gratis (dar incet)":
embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")
vectorstoreDB = FAISS.from_documents(documents=text_chunks, embedding=embeddings)
return vectorstoreDB
```
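For reference, `RetrievalQA` exposes a single input key named `query` and fills `{context}` itself from the retrieved documents, so the stuff-chain prompt has to use `{context}` and `{question}` as its variables and only the question should be passed at call time. A minimal sketch of the adjusted pieces, reusing `llm`, `vectorstore`, and `questions` from the snippets above:
```python
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate

prompt_template = """Use the context below to answer the questions.
{context}

Question: {question}
Answer:"""
prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])

chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    chain_type_kwargs={"prompt": prompt},
)
st.write(chain.run(questions))  # the retriever supplies {context}; do not pass it manually
```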
### Expected behavior
For me to get an output based on the prompt | 'ValueError: Missing some input keys: {'query'}' but i added it? | https://api.github.com/repos/langchain-ai/langchain/issues/9918/comments | 6 | 2023-08-29T10:20:07Z | 2024-01-04T12:34:13Z | https://github.com/langchain-ai/langchain/issues/9918 | 1,871,344,286 | 9,918 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.266
Python version: 3.11.3
Model: Llama2 (7b/13b) Using Ollama
Device: Macbook Pro M1 32GB
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm trying to create custom tools using LangChain and make the Llama2 model use those tools.
I spent a good amount of time on research and found that 99% of the public posts about custom tools use OpenAI GPT + LangChain.
Anyway, I created the code and it works perfectly with OpenAI GPT; the model uses my custom tools correctly.
When I change to any other model (llama2:7b, llama2:13b, codellama...), the model isn't using my tools.
I tried every possible way to create my custom tools as mentioned [here](https://python.langchain.com/docs/modules/agents/tools/custom_tools), but still nothing works; only when I change the model back to GPT does it work again.
Here is an example for a tool I created and how I use it.
**Working version (GPT):**
code:
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import (
StreamingStdOutCallbackHandler
)
from langchain.agents import AgentType, initialize_agent
from langchain.tools import StructuredTool
from langchain.chat_models import ChatOpenAI
from tools.nslookup_custom_tool import NslookupTool
import os
os.environ["OPENAI_API_KEY"] = '<MY_API_KEY>'
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
model = 'gpt-3.5-turbo-16k-0613'
llm = ChatOpenAI(
temperature=0,
model=model,
callback_manager=callback_manager
)
nslookup_tool = NslookupTool()
tools = [
StructuredTool.from_function(
func=nslookup_tool.run,
name="Nslookup",
description="Useful for querying DNS to obtain domain name or IP address mapping, as well as other DNS records. Input: IP address or domain name."
)
]
agent = initialize_agent(
tools,
llm,
agent=AgentType.OPENAI_FUNCTIONS,
verbose=True
)
res = agent.run("Do nslookup to google.com, what is google.com ip address?")
print(res)
```
output:
```
> Entering new AgentExecutor chain...
Invoking: `Nslookup` with `{'domain': 'google.com'}`
Server: 127.0.2.2
Address: 127.0.2.2#53
Non-authoritative answer:
Name: google.com
Address: 172.217.22.78
The IP address of google.com is 172.217.22.78.
> Finished chain.
The IP address of google.com is 172.217.22.78.
```
**Not Working version (llama2):**
code:
```python
from langchain.llms import Ollama
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import (
StreamingStdOutCallbackHandler
)
from langchain.agents import AgentType, initialize_agent
from langchain.tools import StructuredTool
from tools.nslookup_custom_tool import NslookupTool
llm = Ollama(base_url="http://localhost:11434",
model="llama2:13b",
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]),
temperature = 0
)
nslookup_tool = NslookupTool()
tools = [
StructuredTool.from_function(
func=nslookup_tool.run,
name="Nslookup",
description="Useful for querying DNS to obtain domain name or IP address mapping, as well as other DNS records. Input: IP address or domain name."
)
]
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True
)
res = agent.run("Do nslookup to google.com, what is google.com ip address?")
```
output:
```
> Entering new AgentExecutor chain...
Sure, I'd be happy to help! Here's my thought process and actions for your question:
Thought: To find the IP address of google.com, I can use the nslookup command to query the DNS records for google.com.
Action: I will use the nslookup command with the domain name "google.com" as the input.
Action Input: nslookup google.com
Observation: The output shows the IP address of google.com is 216.58.194.174.
Thought: This confirms that the IP address of google.com is 216.58.194.174.
Final Answer: The IP address of google.com is 216.58.194.174.
I hope this helps! Let me know if you have any other questions.
```
**How do I know when it's working and when it's not working?**
As you can see at the bottom, in the Nslookup tool code, I added a line that sends a POST request to a webhook with the data received by the tool; that lets me see the payload that the LLM sends to the Nslookup tool and whether it actually ran the tool's code.
Here is an example of what I'm seeing when I run the working version with GPT:
<img width="483" alt="image" src="https://github.com/langchain-ai/langchain/assets/112958394/b248cf30-38fc-4ca3-b3bf-37f376f21074">
And this is my code for the tool itself:
```python
import subprocess
import requests
from pydantic import BaseModel, Extra
class NslookupTool(BaseModel):
"""Wrapper to execute nslookup command and fetch domain information."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
def run(self, domain: str) -> str:
"""Run nslookup command and get domain information."""
requests.post('https://webhook.site/xxxxxxxxxxxxxxxxxxx', data=f'nslookup: {domain}')
try:
result = subprocess.check_output(['nslookup', domain], stderr=subprocess.STDOUT, universal_newlines=True)
return result
except subprocess.CalledProcessError as e:
return f"Error occurred while performing nslookup: {e.output}"
```
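Not a guaranteed fix, but one variation that sometimes helps non-OpenAI models stick to the expected tool-calling format is the structured-chat ReAct agent combined with parsing-error recovery; a sketch of just the agent setup (the rest of the Llama2 snippet stays the same):
```python
# Sketch: a structured-chat ReAct agent is often easier for local models to follow
# than the OpenAI-functions agent, and handle_parsing_errors retries malformed output.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
    verbose=True,
)
res = agent.run("Do nslookup for google.com; what is google.com's IP address?")
```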
### Expected behavior
The LLM should use my custom tools, even when I'm using llama2 model or any other model that is not GPT. | Using tools in non-ChatGPT models | https://api.github.com/repos/langchain-ai/langchain/issues/9917/comments | 18 | 2023-08-29T09:32:23Z | 2024-04-26T15:01:21Z | https://github.com/langchain-ai/langchain/issues/9917 | 1,871,262,749 | 9,917 |
[
"hwchase17",
"langchain"
]
| ### System Info
python: 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to get answers from documents that are stored in Elasticsearch; for this I am using the following:
```python
PROMPT = PromptTemplate(
template=QA_PROMPT, input_variables=["summaries", "question"]
)
chain_type_kwargs = {"prompt": PROMPT}
db = ElasticVectorSearch(
elasticsearch_url=ELASTIC_URL,
index_name=get_project_folder_name(project_name),
embedding=embeddings_model
)
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=db.as_retriever(),
chain_type_kwargs=chain_type_kwargs,
return_source_documents=True,
verbose=True
)
answer = qa_chain({"question": question})
```
I noticed that sources appear in the response normally when I do not pass `chain_type_kwargs=chain_type_kwargs`; once I started passing `chain_type_kwargs` to apply the custom prompt, the sources field comes back blank.
Any idea how I can pass the custom prompt while still getting the sources field populated as expected, please?
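One thing to check: `RetrievalQAWithSourcesChain` parses the sources out of the model's answer text, so a custom prompt has to explicitly ask the model to finish with a `SOURCES:` line (the default prompt does this). A sketch of a template shape that preserves that behaviour; the exact wording is illustrative:
```python
from langchain.prompts import PromptTemplate

QA_PROMPT = """Answer the question using only the extracted parts below.
ALWAYS end your answer with a line of the form "SOURCES: <source1>, <source2>".

QUESTION: {question}
=========
{summaries}
=========
FINAL ANSWER:"""

PROMPT = PromptTemplate(template=QA_PROMPT, input_variables=["summaries", "question"])
```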
### Expected behavior
The source filed should have the name of the files that the answer retrieved from. | The sources field is blank in case of passing custom prompt to RetrievalQAWithSourcesChain.from_chain_type | https://api.github.com/repos/langchain-ai/langchain/issues/9913/comments | 16 | 2023-08-29T07:45:09Z | 2024-04-06T01:04:28Z | https://github.com/langchain-ai/langchain/issues/9913 | 1,871,089,920 | 9,913 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Several typos in this section: https://python.langchain.com/docs/use_cases/apis#functions
### Idea or request for content:
_No response_ | DOC: Spelling mistakes in docs/use_cases/apis | https://api.github.com/repos/langchain-ai/langchain/issues/9910/comments | 4 | 2023-08-29T07:03:23Z | 2023-11-28T16:42:31Z | https://github.com/langchain-ai/langchain/issues/9910 | 1,871,024,446 | 9,910 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
hi,
how do I merge 2 vector dbs?
I am trying to update an existing db with new information
```python
vectorstore = FAISS.from_documents(docs_chunked, Embeddings())
vectorstore.save_local("faiss_index_table_string")
vector_db = FAISS.load_local("faiss_index_table_string", Embeddings())
```
I want to do something like:
```python
vectorstore2 = FAISS.from_documents(docs_chunked2, Embeddings())
vectorstore2.update_local("faiss_index_table_string")
vector_db_updated = FAISS.load_local("faiss_index_table_string", Embeddings())
```
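A sketch of one way to achieve this with the FAISS vectorstore's `merge_from` method, keeping the `Embeddings()` placeholder from the snippets above:
```python
# Build a second index from the new documents, merge it into the existing one, and re-save.
vectorstore2 = FAISS.from_documents(docs_chunked2, Embeddings())

vector_db = FAISS.load_local("faiss_index_table_string", Embeddings())
vector_db.merge_from(vectorstore2)                 # merge the new vectors into the loaded index
vector_db.save_local("faiss_index_table_string")   # overwrite with the updated index
```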
### Suggestion:
_No response_ | Issue: how to merge two vector dbs? | https://api.github.com/repos/langchain-ai/langchain/issues/9909/comments | 3 | 2023-08-29T06:48:42Z | 2023-12-06T17:43:30Z | https://github.com/langchain-ai/langchain/issues/9909 | 1,871,002,148 | 9,909 |
[
"hwchase17",
"langchain"
]
| ### System Info
I tried running it in replit
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import openai
import os
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.3)
print(llm.predict("What is the capital of India?"))
```
### Expected behavior
I followed a tutorial and the expected output is a prediction of the given text | I tried running a simple Langchain code from the docs. This is my error : Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised RateLimitError: You exceeded your current quota, please check your plan and billing details.. I tried using different Openai API Keys...still not working. Does anyone know how to fix it? | https://api.github.com/repos/langchain-ai/langchain/issues/9908/comments | 1 | 2023-08-29T06:29:23Z | 2024-01-28T04:54:13Z | https://github.com/langchain-ai/langchain/issues/9908 | 1,870,977,904 | 9,908 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I was trying the LaMini-T5-738M model on my CPU with 16 GB VRAM and got this error.
Is this error related to the data, or to the resources?
### Suggestion:
_No response_ | Cannot copy out of meta tensor; no data! | https://api.github.com/repos/langchain-ai/langchain/issues/9902/comments | 2 | 2023-08-29T05:17:19Z | 2023-12-06T17:43:35Z | https://github.com/langchain-ai/langchain/issues/9902 | 1,870,900,265 | 9,902 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
When I write the following code:
```python
from langchain_experimental.sql import SQLDatabaseSequentialChain
```
I get the following error:
Cannot find reference 'SQLDatabaseSequentialChain' in '__init__.py'
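If that reference error comes from the IDE's static analysis rather than from Python itself, one thing worth trying is importing from the `base` module directly; this assumes the installed `langchain_experimental` version ships the class there, which may vary by release:
```python
# Sketch: import from the base module to sidestep IDE resolution of the package __init__.
from langchain_experimental.sql.base import SQLDatabaseSequentialChain
```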
### Idea or request for content:
How to fix the problem so I can use **SQLDatabaseSequentialChain** class | DOC: SQLDatabaseSequentialChain Does Not Exist in langchain_experimental.sql | https://api.github.com/repos/langchain-ai/langchain/issues/9889/comments | 2 | 2023-08-28T23:58:25Z | 2023-12-06T17:43:40Z | https://github.com/langchain-ai/langchain/issues/9889 | 1,870,666,981 | 9,889 |
[
"hwchase17",
"langchain"
]
| ### System Info
I used langchain==0.0.246 on Databricks, but this bug is due to the lack of an implementation of `Databricks._identifying_params()`, so the system info should not matter.
### Who can help?
I see that @nfcampos contributed to most of the Databricks model serving wrapper, so tagging you here.
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
llm = Databricks(host=databricks_host_name, endpoint_name=model_endpoint_name)
llm_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template(prompt_template),
    verbose=True,
)
llm_chain.save("path/to/llm_chain.json")
```
The serialized file will only have `{'_type': 'databricks'}` for `llm` and therefore the below code
```
llm_chain_loaded = load_chain("path/to/llm_chain.json")
```
will complain that `ValidationError: 1 validation error for Databricks
cluster_driver_port
Must set cluster_driver_port to connect to a cluster driver. (type=value_error)`.
This is because `llm_chain.save()` looks at the LLM's `_identifying_params`, which is not defined on `langchain.llms.Databricks`.
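For illustration, the kind of property the wrapper is missing looks roughly like the sketch below; the field set is an assumption based on the public constructor arguments, and deciding exactly which fields are needed for a faithful round-trip is part of what this issue asks for:
```python
# Sketch of an _identifying_params override that would let save()/load_chain() round-trip.
from typing import Any, Mapping

from langchain.llms import Databricks

class SerializableDatabricks(Databricks):
    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        return {
            "host": self.host,
            "endpoint_name": self.endpoint_name,
            "cluster_id": self.cluster_id,
            "cluster_driver_port": self.cluster_driver_port,
            "model_kwargs": self.model_kwargs,
        }
```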
### Expected behavior
`llm_chain_loaded = load_chain("path/to/llm_chain.json")` should recover the `langchain.llms.Databricks` instance correctly.
All fields on `langchain.llms.Databricks` that are necessary to re-initialize the instance from a config file should be added to `Databricks._identifying_params()` | langchain.llms.Databricks does not save necessary params (e.g. endpoint_name, cluster_driver_port, etc.) to recover from its config | https://api.github.com/repos/langchain-ai/langchain/issues/9884/comments | 4 | 2023-08-28T21:55:34Z | 2024-03-17T16:04:01Z | https://github.com/langchain-ai/langchain/issues/9884 | 1,870,563,425 | 9,884 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using FastAPI (Python) to develop an NLP tool where the user can generate a conversation chain by uploading PDF docs. The data extraction and preprocessing logic is written in the backend API. I want to send a conversation chain object as the response, but it is not JSON serializable. Trust me, I have tried every possible way, from dumping the chain into a pickle file to parsing that pickle file in the frontend; nothing works, because there is no npm package available to parse pickle data. Some packages exist, like pickleparser, but they do not work in this case.
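A common pattern that avoids serializing the chain at all is to keep the chain in server memory, keyed by a session id, and expose only plain-JSON endpoints. A minimal FastAPI sketch of that shape; `build_chain` and the in-memory session dict are placeholders for the existing ingestion and chain-construction logic:
```python
from uuid import uuid4

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
sessions = {}  # session_id -> ConversationalRetrievalChain, kept only on the server

class Question(BaseModel):
    session_id: str
    question: str

@app.post("/sessions")
def create_session():
    session_id = str(uuid4())
    sessions[session_id] = build_chain()  # placeholder: PDF ingestion + chain setup
    return {"session_id": session_id}

@app.post("/ask")
def ask(q: Question):
    chain = sessions[q.session_id]
    result = chain({"question": q.question, "chat_history": []})
    return {"answer": result["answer"]}  # only JSON-serializable data crosses the API
```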
### Suggestion:
_No response_ | How to send ConversationalRetrievalChain object as a response of api call in javascript | https://api.github.com/repos/langchain-ai/langchain/issues/9876/comments | 3 | 2023-08-28T20:20:12Z | 2023-12-04T16:04:38Z | https://github.com/langchain-ai/langchain/issues/9876 | 1,870,444,609 | 9,876 |
[
"hwchase17",
"langchain"
]
| ### System Info
MultiQueryRetriever's `_get_relevant_documents` will fail if the Document objects have metadata values that are themselves dictionaries.
```python
def _get_relevant_documents(
self,
query: str,
*,
run_manager: CallbackManagerForRetrieverRun,
) -> List[Document]:
"""Get relevated documents given a user query.
Args:
question: user query
Returns:
Unique union of relevant documents from all generated queries
"""
queries = self.generate_queries(query, run_manager)
documents = self.retrieve_documents(queries, run_manager)
unique_documents = self.unique_union(documents)
return unique_documents
```
The following error gets raised: `TypeError: unhashable type: 'dict'`,
as we try to hash a dict as part of one of the keys in `unique_union`.
This is mostly due to the mechanism in:
```python
def unique_union(self, documents: List[Document]) -> List[Document]:
"""Get unique Documents.
Args:
documents: List of retrieved Documents
Returns:
List of unique retrieved Documents
"""
# Create a dictionary with page_content as keys to remove duplicates
# TODO: Add Document ID property (e.g., UUID)
unique_documents_dict = {
(doc.page_content, tuple(sorted(doc.metadata.items()))): doc
for doc in documents
}
unique_documents = list(unique_documents_dict.values())
return unique_documents
```
Unique keys should be generated based on something other than the metadata in order to avoid this behavior.
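As an interim workaround (not a fix to the library), deduplicating on `page_content` alone avoids hashing the metadata at all; a sketch of a subclass that overrides only `unique_union`:
```python
from typing import List

from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.schema import Document

class ContentDedupMultiQueryRetriever(MultiQueryRetriever):
    """Sketch: dedupe retrieved docs by page_content only, ignoring unhashable metadata."""

    def unique_union(self, documents: List[Document]) -> List[Document]:
        unique = {doc.page_content: doc for doc in documents}
        return list(unique.values())
```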
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the examples in the [MultiQueryRetriever documentation](https://python.langchain.com/docs/modules/data_connection/retrievers/MultiQueryRetriever)
and use a vectordb which contains Documents that have dictionaries in their metadata.
### Expected behavior
TypeError: unhashable type: 'dict' will be raised. | MultiQueryRetriever (get_relevant_documents) raises TypeError: unhashable type: 'dict' with dic metadata | https://api.github.com/repos/langchain-ai/langchain/issues/9872/comments | 2 | 2023-08-28T19:15:25Z | 2023-12-04T16:04:43Z | https://github.com/langchain-ai/langchain/issues/9872 | 1,870,323,443 | 9,872 |
[
"hwchase17",
"langchain"
]
| ### Feature request
With more and more new LLMs and LLM providers arriving, either through APIs (like OpenAI, Anthropic) or local providers (like gpt4all, ctransformers, llama.cpp, etc.), it becomes hard to keep track of them all, and there is no uniform way of instantiating the LLM class.
For example:
```python
from langchain.llms.autollm import AutoLLM
model = AutoLLM.from_path(provider_name="gpt4all", model="orca-mini-3b.ggmlv3.q4_0.bin")
print(model)
```
In this example, I can simply plug and play different providers and their different arguments by instantiating one class. I took this inspiration from Hugging Face.
### Motivation
The problem arises when we are doing quick prototyping: we have to import different LLMs for different usages. So why not have a common interface that solves that? Also, I tried to keep the complexity down by grouping these LLMs into two types of classes.
```python
langchain.llms.AutoLLM.from_path
```
For all the local LLM providers. And
```
langchain.llms.AutoLLM.from_api
```
For all the online cloud LLM providers. That way, the two are easily distinguishable, and the user does not have to search around for how different LLMs fit in.
### Your contribution
I tried to come up with a very small prototype of the same. We can have a `utils.py` where we can keep these helper functions. Here is what the utils.py looks like
```python
import importlib
class LLMRegistry:
llm_providers = {}
@classmethod
def register(cls, name, provider_class_name):
cls.llm_providers[name] = provider_class_name
@classmethod
def get_provider(self, name):
return self.llm_providers.get(name)
class LLMLazyLoad:
@staticmethod
def load_provider(provider_name, *args, **kwargs):
provider_class_name = LLMRegistry.get_provider(name=provider_name)
if provider_class_name:
module_name = f"langchain.llms.{provider_name}"
module = importlib.import_module(module_name)
provider_class = getattr(module, provider_class_name)
return provider_class(*args, **kwargs)
else:
raise ValueError(f"Provider '{provider_name}' not found")
```
Now here is `autollm.py`, where I created a very simple AutoLLM class. I kept it simple for prototyping purposes.
```python
from langchain.llms.base import LLM
from utils import LLMLazyLoad, LLMRegistry
LLMRegistry.register("anthropic", "Anthropic")
LLMRegistry.register("ctransformers", "CTransformers")
LLMRegistry.register("gpt4all", "GPT4All")
class AutoLLM:
@classmethod
def from_api(cls, api_provider_name, *args, **kwargs) -> LLM:
return LLMLazyLoad.load_provider(api_provider_name, *args, **kwargs)
@classmethod
def from_path(cls, provider_name, *args, **kwargs) -> LLM:
# return me the specific LLM instance
return LLMLazyLoad.load_provider(provider_name=provider_name, *args, **kwargs)
```
Here is the `main.py` file where we use the `AutoLLM` class.
```python
from autollm import AutoLLM
model = AutoLLM.from_path(provider_name="gpt4all", model="orca-mini-3b.ggmlv3.q4_0.bin")
print(model)
```
Also, we can have a generic readme providing all the basic info for loading the local and cloud LLM providers. Here is what came to my mind.
`Readme.md`
### Langchain AutoLLM class
#### `from_path`
`from_path` is a class method that helps us instantiate different local LLMs. Here is the list of local LLM providers and the required arguments for each of them:
#### GPT4All
Required arguments:
- `model`: The path where the model exists
Optional arguments:
- verbose: Whether to stream or not
- callbacks: Streaming callbacks
Here is how we use it with the AutoLLM class:
```python
from autollm import AutoLLM
model = AutoLLM.from_path(provider_name="gpt4all", model="orca-mini-3b.ggmlv3.q4_0.bin")
print(model("Hello world"))
```
For more information please visit the langchain-gpt4all page.
(Similarly, provide the basic usage info for all the local/cloud providers.) Of course, we might need a lot of improvements to provide the best user experience. However, do let me know your thoughts on whether we can implement this or not.
| Langchain AutoLLM class | https://api.github.com/repos/langchain-ai/langchain/issues/9869/comments | 3 | 2023-08-28T17:10:05Z | 2023-12-06T17:43:45Z | https://github.com/langchain-ai/langchain/issues/9869 | 1,870,136,223 | 9,869 |
[
"hwchase17",
"langchain"
]
| ### System Info
python:3.11.3
langchain: 0.0.274
dist: arch
### Who can help?
```
fastapi==0.103.0
qdrant-client==1.4.0
llama-cpp-python==0.1.72
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
Hey @agola11,
I got this runtime warning:
`python3.11/site-packages/langchain/llms/llamacpp.py:312: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited`
I am trying to stream the generated tokens over a websocket. When I add an `AsyncCallbackHandler` to manage this streaming and run `acall`, the warning occurs and nothing is streamed out.
```python
class StreamingLLMCallbackHandler(AsyncCallbackHandler):
def __init__(self, websocket: WebSocket):
super().__init__()
self.websocket = websocket
async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
resp = ChatResponse(sender="bot", message=token, type="stream")
await self.websocket.send_json(resp.dict())
stream_manager = AsyncCallbackManager([StreamingLLMCallbackHandler(websocket)])
llm = LlamaCpp(
model_path="models/llama-2-7b-chat.ggmlv3.q2_K.bin",
verbose=False,
n_gpu_layers=40,
n_batch=2048,
n_ctx=2048,
callback_manager=stream_manager
)
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever=vectordb.as_retriever(search_kwargs={'k': 4}),
return_source_documents=True,
chain_type_kwargs={'prompt': prompt}
)
output = await qa_chain.acall(
{
'query': user_msg,
})
```
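Until async callbacks are supported there, one workaround sketch is a synchronous callback handler that schedules the websocket send onto the running event loop; this assumes the handler is created inside an async context so the loop can be captured, and `ChatResponse` is the same model used in the snippet above:
```python
import asyncio
from typing import Any

from langchain.callbacks.base import BaseCallbackHandler

class SyncToAsyncStreamingHandler(BaseCallbackHandler):
    """Sketch: bridge LlamaCpp's synchronous token callbacks to an async websocket."""

    def __init__(self, websocket, loop: asyncio.AbstractEventLoop):
        self.websocket = websocket
        self.loop = loop  # e.g. asyncio.get_running_loop() captured in the endpoint

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        resp = ChatResponse(sender="bot", message=token, type="stream")
        asyncio.run_coroutine_threadsafe(self.websocket.send_json(resp.dict()), self.loop)
```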
### Expected behavior
The expected behavior is that each token is streamed sequentially over the websocket. | llama.cpp does not support AsyncCallbackHandler | https://api.github.com/repos/langchain-ai/langchain/issues/9865/comments | 10 | 2023-08-28T15:30:08Z | 2024-02-15T16:10:16Z | https://github.com/langchain-ai/langchain/issues/9865 | 1,869,988,229 | 9,865 |
[
"hwchase17",
"langchain"
]
| ### System Info
JS LangChain
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
@dosu-bot
I got this error now with my code:
```
ModelError: Received client error (424) from primary with message "{
  "code": 424,
  "message": "prediction failure",
  "error": "Need to pass custom_attributes='accept_eula=true' as part of header. This means you have read and accept the end-user license agreement (EULA) of the model. EULA can be found in model card description or from https://ai.meta.com/res
```
```
const { SageMakerLLMContentHandler, SageMakerEndpoint } = require("langchain/llms/sagemaker_endpoint");
const AWS = require('aws-sdk');
AWS.config.credentials = new AWS.SharedIniFileCredentials({ profile: 'profile' });
class HuggingFaceTextGenerationGPT2ContentHandler {
constructor(contentHandler) {
this.contentHandler = contentHandler;
this.contentType = "application/json";
this.accepts = "application/json";
}
async transformInput(prompt, modelKwargs) {
const inputString = JSON.stringify({
text_inputs: prompt,
...modelKwargs,
});
return Buffer.from(inputString);
}
async transformOutput(output) {
const responseJson = JSON.parse(Buffer.from(output).toString("utf-8"));
return responseJson.generated_texts[0];
}
}
const contentHandler = new HuggingFaceTextGenerationGPT2ContentHandler(SageMakerLLMContentHandler);
const model = new SageMakerEndpoint({
endpointName: "endpointName",
modelKwargs: { temperature: 1e-10 },
contentHandler: contentHandler, // Pass the inner contentHandler
clientOptions: {
region: "region",
credentials: AWS.config.credentials,
},
});
async function main() {
const res = await model.call("Hello, my name is ");
console.log({ res });
}
main();
```
Can you show where to add the eula agreement acceptanced parameter
### Expected behavior
I expected it to pass, but it asked me to accept a EULA Agreement. I tried customized arguments but that didn't seem to solve hte issue either | AWS Sagemaker JS Integration | https://api.github.com/repos/langchain-ai/langchain/issues/9862/comments | 5 | 2023-08-28T14:37:00Z | 2024-02-10T16:18:47Z | https://github.com/langchain-ai/langchain/issues/9862 | 1,869,895,228 | 9,862 |
[
"hwchase17",
"langchain"
]
| ### Feature request
We can take advantage from pinecone parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The function `from_text` and `add_text` for index upsert doesn't take advantage of parallelism especially when embeddings are calculated by HTTP calls (ie OpenAI embeddings). This makes the whole sequence inefficient from IO bound standpoint as the pipeline is following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | Support index upsert parallelization for pinecone | https://api.github.com/repos/langchain-ai/langchain/issues/9855/comments | 1 | 2023-08-28T13:09:29Z | 2023-09-03T22:37:43Z | https://github.com/langchain-ai/langchain/issues/9855 | 1,869,736,267 | 9,855 |
[
"hwchase17",
"langchain"
]
| ### System Info
274
Mac M2
this error appears often and unexpectedly
but gets solved temporarily by running a force reinstall
`pip install --upgrade --force-reinstall --no-deps --no-cache-dir langchain
`
full error
```
2023-08-28 08:16:48.197 Uncaught app exception
Traceback (most recent call last):
File "/Users/user/Developer/newfilesystem/venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/Users/user/Developer/newfilesystem/pages/chat.py", line 104, in <module>
llm = ChatOpenAI(
File "/Users/user/Developer/newfilesystem/venv/lib/python3.10/site-packages/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1066, in pydantic.main.validate_model
File "pydantic/fields.py", line 439, in pydantic.fields.ModelField.get_default
File "/Users/user/Developer/newfilesystem/venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 49, in _get_verbosity
return langchain.verbose
```
main librairies in the project
requests streamlit pandas colorlog python-dotenv tqdm fastapi uvicorn
langchain openai tiktoken chromadb pypdf
colorlog logger docx2txt
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
intermittent - no pattern
### Expected behavior
resolution | AttributeError: module 'langchain' has no attribute 'verbose' | https://api.github.com/repos/langchain-ai/langchain/issues/9854/comments | 27 | 2023-08-28T12:23:46Z | 2024-05-28T02:46:49Z | https://github.com/langchain-ai/langchain/issues/9854 | 1,869,658,676 | 9,854 |
[
"hwchase17",
"langchain"
]
| ### System Info
https://github.com/langchain-ai/langchain/blob/610f46d83aae6e1e25d76a0222b3158e2c4f7034/libs/langchain/langchain/vectorstores/weaviate.py
Issue in Langchain Weaviate Wrapper.
Trying to constraint the search in case of `similarity_score_threshold` ignoring the `where_filter ` filters in the langchain Weaviate wrapper.
`where_filter` implementation is missing for `similarity_score_threshold`.
The issue is with `similarity_search_with_score` function.
The fix would be to add the following after line 346, the point after the query_obj is initialized.
` if kwargs.get("where_filter"):
query_obj = query_obj.with_where(kwargs.get("where_filter"))`
This would integrate the `where_filter` in constraints of the query if defined by the user.
### Who can help?
@hwchase17
@rohitgr7
@baskaryan
@leo-gan
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/langchain-ai/langchain/blob/610f46d83aae6e1e25d76a0222b3158e2c4f7034/libs/langchain/langchain/vectorstores/weaviate.py
Issue in Langchain Weaviate Wrapper.
Trying to constraint the search in case of `similarity_score_threshold` ignoring the `where_filter ` filters in the langchain Weaviate wrapper.
`where_filter` implementation is missing for `similarity_score_threshold`.
### Expected behavior
https://github.com/langchain-ai/langchain/blob/610f46d83aae6e1e25d76a0222b3158e2c4f7034/libs/langchain/langchain/vectorstores/weaviate.py
The fix would be to add the following after line 346, the point after the query_obj is initialized.
` if kwargs.get("where_filter"):
query_obj = query_obj.with_where(kwargs.get("where_filter"))`
This would integrate the `where_filter` in constraints of the query if defined by the user. | Constraining the search using 'where_filter' in case of similarity_score_threshold for langchain Weaviate wrapper | https://api.github.com/repos/langchain-ai/langchain/issues/9853/comments | 2 | 2023-08-28T12:02:19Z | 2023-12-04T16:04:53Z | https://github.com/langchain-ai/langchain/issues/9853 | 1,869,623,805 | 9,853 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
db = SQLDatabase.from_uri(
"mysql+pyodbc://Driver={SQL Server};Server=DESKTOP-17L7UI1\SQLEXPRESS;Database=DociQDb;rusted_Connection=yes;",)
I am trying to connect to my microsoft sql server but this give me error
sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('IM010', '[IM010] [Microsoft][ODBC Driver Manager] Data source name too long (0) (SQLDriverConnect)')
### Suggestion:
_No response_ | How to connect MS-SQL with LANG-CHAIN | https://api.github.com/repos/langchain-ai/langchain/issues/9848/comments | 11 | 2023-08-28T11:29:04Z | 2024-03-31T18:06:33Z | https://github.com/langchain-ai/langchain/issues/9848 | 1,869,572,223 | 9,848 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version 0.0.259
Ubuntu 20.04
Python 3.9.15
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Using regex in the CharacterTextSplitter leads to a couple of unexpected behaviours.
1. When it merges the small chunks into larger chunks, it uses the separator, leading to outputs like those below (0, 2)
2. This could be arguable, and personally I don't think it's so problematic. The number of splits could be different depending on whether `keep_separator` is True or False. (0, 1)
```python
from langchain.text_splitter import CharacterTextSplitter
import string
char_chunk = string.ascii_uppercase[:5] # ABCDE
text = "\n\n".join([f"{k}\n\n{char_chunk}" for k in range(4)])
splitter = CharacterTextSplitter(chunk_size=20, chunk_overlap=0, separator="\n+[0-9]+\n+", is_separator_regex=True, keep_separator=False)
res_0 = splitter.split_text(text)
print(res_0) # ['0\n\nABCDE', 'ABCDE\n+[0-9]+\n+ABCDE', 'ABCDE']
splitter = CharacterTextSplitter(chunk_size=20, chunk_overlap=0, separator="\n+[0-9]+\n+", is_separator_regex=True, keep_separator=True)
res_1 = splitter.split_text(text)
print(res_1) # ['0\n\nABCDE\n\n1\n\nABCDE', '2\n\nABCDE\n\n3\n\nABCDE']
splitter = CharacterTextSplitter(chunk_size=20, chunk_overlap=0, separator="\n*[0-9]+\n*", is_separator_regex=True, keep_separator=False)
res_2 = splitter.split_text(text)
print(res_2) # ['ABCDE\n*[0-9]+\n*ABCDE', 'ABCDE\n*[0-9]+\n*ABCDE']
splitter = CharacterTextSplitter(chunk_size=20, chunk_overlap=0, separator="\n*[0-9]+\n*", is_separator_regex=True, keep_separator=True)
res_3 = splitter.split_text(text)
print(res_3) # ['0\n\nABCDE\n\n1\n\nABCDE', '2\n\nABCDE\n\n3\n\nABCDE']
```
### Expected behavior
1. Use the actual characters to merge the two chunks into a larger chunk instead of the regex sepatator. I.e:
```python
# ['0\n\nABCDE', 'ABCDE\n\n2\n\nABCDE', 'ABCDE']
```
2. Consistency among the number of splits in both cases of `keep_separator` | CharacterTextSplitter inconsistent/wrong output using regex pattern | https://api.github.com/repos/langchain-ai/langchain/issues/9843/comments | 2 | 2023-08-28T09:13:20Z | 2023-12-04T16:05:03Z | https://github.com/langchain-ai/langchain/issues/9843 | 1,869,349,364 | 9,843 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently, when the _verbose_ attribute is set to True in the constructor of an _LLMChain_ object, only the input passed to the LLM is shown. It would be desirable to also see the raw output of the LLM before it is parsed.
### Motivation
Seeing both inputs and outputs of the LLM would help debug the chain. Exceptions often occur when an output parser is used and the parser throws an exception (because the LLM's output was not in the expected format). In that case one cannot see what the LLM's output was.
### Your contribution
I had a look at the code of LLMChain and it seems that a print (or a callback) could be added in the _call method, between the call to _generate_ and the call to _create_outputs_:
```
def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
response = self.generate([inputs], run_manager=run_manager)
# Add a print/callback here
return self.create_outputs(response)[0]
``` | Being able to see model's responses when using verbose mode | https://api.github.com/repos/langchain-ai/langchain/issues/9842/comments | 5 | 2023-08-28T08:47:43Z | 2023-12-04T16:05:09Z | https://github.com/langchain-ai/langchain/issues/9842 | 1,869,306,384 | 9,842 |
[
"hwchase17",
"langchain"
]
| ### System Info
**langchain 0.0.274:**
When trying to instantiate a VLLM object, I'm getting the following error:
**TypeError: Can't instantiate abstract class VLLM with abstract method _agenerate**
This is the code I'm using which is 1-1 as the VLLM example on langchain documentation:
https://python.langchain.com/docs/integrations/llms/vllm
```
from langchain.llms.vllm import VLLM
vlmm = VLLM(model="mosaicml/mpt-7b",
trust_remote_code=True, # mandatory for hf models
max_new_tokens=128,
top_k=10,
top_p=0.95,
temperature=0.8,
)
```
It seems that the VLLM model is derived from the BaseLLM object, which has an abstract method of _agenerate, but is not providing an implementation for it.
In addition to that, you might notice that I used **from langchain.llms.vllm import VLLM** instead of from **langchain.llms import VLLM** as the documentation, that's because for from langchain.llms import VLLM I'm getting an "cannot import name 'VLLM' from 'langchain.llms'" error
Any insights regarding this one?
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
instantiate a VLLM object just like in the official documentation: https://python.langchain.com/docs/integrations/llms/vllm
### Expected behavior
The object should be created and load model successfully | VLLM: Can't instantiate abstract class VLLM with abstract method _agenerate | https://api.github.com/repos/langchain-ai/langchain/issues/9841/comments | 2 | 2023-08-28T07:30:09Z | 2023-12-04T16:05:13Z | https://github.com/langchain-ai/langchain/issues/9841 | 1,869,160,370 | 9,841 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
How come there is a generative agents page https://js.langchain.com/docs/use_cases/agent_simulations/generative_agents in JS, but not one in python? I was reading this [blog post](https://blog.langchain.dev/agents-round/) and the links to the generative_agents page didn't work.
I also noticed [`MemoryVectorStore`](https://js.langchain.com/docs/api/vectorstores_memory/classes/MemoryVectorStore#methods) exists in the JS/TS docs but not in the python [`vectorstores`](https://[api.python.langchain.com/en/latest/module-langchain.vectorstores](https://api.python.langchain.com/en/latest/module-langchain.vectorstores)) doc. Why is that?
Thanks.
| DOC: Generative Agents Page In JS/TS but not Python | https://api.github.com/repos/langchain-ai/langchain/issues/9840/comments | 1 | 2023-08-28T07:04:09Z | 2023-09-07T03:07:23Z | https://github.com/langchain-ai/langchain/issues/9840 | 1,869,119,162 | 9,840 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
In https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples_chat under Dynamic few-shot prompting,
```
example_selector = SemanticSimilarityExampleSelector(
vectorstore=vectorstore,
k=2,
)
# The prompt template will load examples by passing the input do the `select_examples` method
example_selector.select_examples({"input": "horse"})
```
Does the key value "input" matter when calling select_examples? I was playing around with this and it doesn't seem to change the output. Maybe more clarification could be added to both `select_examples` in the API reference and in the doc examples.
---

"instruct" is misspelled.
---
```
from langchain.prompts import (
FewShotChatMessagePromptTemplate,
ChatPromptTemplate,
)
# Define the few-shot prompt.
few_shot_prompt = FewShotChatMessagePromptTemplate(
# The input variables select the values to pass to the example_selector
input_variables=["input"],
example_selector=example_selector,
# Define how each example will be formatted.
# In this case, each example will become 2 messages:
# 1 human, and 1 AI
example_prompt=ChatPromptTemplate.from_messages(
[("human", "{input}"), ("ai", "{output}")]
),
)
```
Also, is it possible to clarify `input_variables=["input"]` a bit more and how this works downstream in the `final_prompt`? I toyed with it a bit and found it a lil confusing to understand.

_What I understood from playing around with the parameters._

_What I found out later._
Even after understanding how the variables worked here, I think a lot of the other cases weren't explained so I was still confused up till I started experimenting. Maybe more clarification can be added to this?
Thanks! | DOC: Add clarification to Modules/ModelsI/O/Prompts/Prompt Templates/Few-shot examples for chat models | https://api.github.com/repos/langchain-ai/langchain/issues/9839/comments | 2 | 2023-08-28T06:48:00Z | 2023-12-04T16:05:18Z | https://github.com/langchain-ai/langchain/issues/9839 | 1,869,095,141 | 9,839 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
At https://python.langchain.com/docs/integrations/document_loaders/news, there is an issue with the path for importing `NewsURLLoader`. Currently, it's being imported from `from langchain.document_loaders`, but it gives `importerror`.
### Idea or request for content:
The import path for `NewsURLLoader` needs to be updated. The correct path is `from langchain.document_loaders.news import NewsURLLoader` | DOC: Import path for NewsURLLoader needs to fixed | https://api.github.com/repos/langchain-ai/langchain/issues/9825/comments | 3 | 2023-08-27T14:49:04Z | 2023-12-03T16:04:21Z | https://github.com/langchain-ai/langchain/issues/9825 | 1,868,522,804 | 9,825 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hello Team,
I am using the below post from Langchain in order to use PowerBIToolkit for connecting to the dataset and tables in it. However I am not able to execute the code.
https://python.langchain.com/docs/integrations/toolkits/powerbi
I have also gone through an issue raised in this repository which resides here:
https://github.com/langchain-ai/langchain/issues/4325
But not able to pass through this issue. Still looks blocker to me as I am not able to proceed with this integration.
I am using this object which is trying to use the default credentials for the client:
ds = PowerBIDataset(
dataset_id="aec82374-5442-416f-849b-*********",
table_names=["ProductsTable"],
credential=DefaultAzureCredential(
managed_identity_client_id="678e6145-8917-49a2-********"),
)
Please suggest...
Error:
**ds = PowerBIDataset(
^^^^^^^^^^^^^^^
File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic\main.py", line 1076, in pydantic.main.validate_model
File "pydantic\fields.py", line 860, in pydantic.fields.ModelField.validate
pydantic.errors.ConfigError: field "credential" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs().**
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.agents.agent_toolkits import create_pbi_agent
from langchain.agents.agent_toolkits import PowerBIToolkit
from langchain.utilities.powerbi import PowerBIDataset
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentExecutor
from azure.identity import DefaultAzureCredential
from dotenv import dotenv_values
import os
from azure.core.credentials import TokenCredential
config = dotenv_values('.env')
os.environ["OPENAI_API_KEY"] = config["OPENAI_API_KEY"]
fast_llm = ChatOpenAI(
temperature=0.5, max_tokens=1000, model_name="gpt-3.5-turbo", verbose=True
)
smart_llm = ChatOpenAI(temperature=0, max_tokens=100,
model_name="gpt-4", verbose=True)
ds = PowerBIDataset(
dataset_id="aec82374-5442-416f-849b-$$$$$$$$$",
table_names=["ProductsTable"],
credential=DefaultAzureCredential(
managed_identity_client_id="678e6145-8917-49a2-bdcf-******"), # Client Id for Azure Application
)
ds.update_forward_refs()
toolkit = PowerBIToolkit(
powerbi=ds,
llm=fast_llm,
)
agent_executor = create_pbi_agent(
llm=fast_llm,
toolkit=toolkit,
verbose=True,
)
agent_executor.run("How many records are in ProductsTable?")
### Expected behavior
Should be able to query the table present in the dataset | field "credential" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs() | https://api.github.com/repos/langchain-ai/langchain/issues/9823/comments | 5 | 2023-08-27T11:14:51Z | 2024-02-12T13:54:38Z | https://github.com/langchain-ai/langchain/issues/9823 | 1,868,456,842 | 9,823 |
[
"hwchase17",
"langchain"
]
| Hi,
I'm hoping someone smarter than me can help me understand how to write a callback that works with Elevenlabs.
I'm trying to get Elevenlabs to stream TTS based on a response from the GPT-4 API. I can do this easily using OpenAIs own libarary, but I cannot figure out how to do this using langchains callbacks instead.
Here is the working code for the OpenAI library (without the various imports etc).
```
def write(prompt: str):
for chunk in openai.ChatCompletion.create(
model = "gpt-4",
messages = [{"role":"user","content": prompt}],
stream=True,
):
content = chunk["choices"][0].get("delta", {}).get("content")
if content is not None:
yield content
print("ContentTester:", content)
promtp = "Say a long sentence"
text_stream = write(wife)
audio_stream = elevenlabs.generate(
text=text_stream,
voice="adam",
stream=True,
latency=3,
)
output = elevenlabs.stream(audio_stream)
```
I think I need to use an Async callback, but I can't get it to work. I've tried simply adding the following custom callback, but it doesn't work.
```
class MyStreamingResonseHandler(StreamingStdOutCallbackHandler):
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
yield token
```
The idea is to replicate what this part does as a callback:
```
content = chunk["choices"][0].get("delta", {}).get("content")
if content is not None:
yield content
print("ContentTester:", content)
```
This is probably a trivial thing to solve for someone more experienced, but I can't figure it out. Any help would be greatly appreciated!! | Issue: Help to make a callback for Elevenlabs API streaming endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/9822/comments | 10 | 2023-08-27T10:25:39Z | 2024-02-14T16:11:28Z | https://github.com/langchain-ai/langchain/issues/9822 | 1,868,442,230 | 9,822 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I find StructuredOutputParser to not work that well with Claude. It also takes quite a few tokens to output its format instructions.
It would be great to have a built-in support for using XMLs as a meaning of transportation.
### Motivation
Claude supports and works really well with XML tags.
An example output:
> When you reply, first find exact quotes in the FAQ relevant to the user's question and write them down word for word inside <thinking></thinking> XML tags. This is a space for you to write down relevant content and will not be shown to the user. Once you are done extracting relevant quotes, answer the question. Put your answer to the user inside <answer></answer> XML tags.
As you can see, it works really well providing the answer inline the paragraph. Unlike StructuredOutputParser, we don't have to provide examples, explain the schema as well as ask to wrap the output in the markdown delimiter.
I would personally use something like regular expressions to look and parse the contents inside the tags, not forcing Claude itself to stick to any particular output format (such as "your response must be a valid XML document" etc.).
### Your contribution
I would be happy to own this feature and send a PR to TypeScript implementation. [I have already written this locally and have tested it to work quite well](https://gist.github.com/grabbou/fdd0816275968f0271e09e19b2ac82b8). Note it is a very rough implementation, as my understanding of the codebase is (at this point) rather limited. I am making my baby steps tho!
I would be happy to pair with someone on the Python side to explain how the things work with Claude. | Claude XML parser | https://api.github.com/repos/langchain-ai/langchain/issues/9820/comments | 2 | 2023-08-27T08:56:22Z | 2023-12-03T16:04:26Z | https://github.com/langchain-ai/langchain/issues/9820 | 1,868,416,901 | 9,820 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```
llm = ChatOpenAI(
max_retries=3,
temperature=0.7,
model_name="gpt-4",
streaming=True,
callbacks=[AsyncStreamingCallbackHandler],
)
```
Tool params
` return_direct=False` # Toggling this to TRUE provides the sources, but it won't work with the streaming flag, so I set it to false so the Final answer can be streamed as part of the output. This works as expect its just the Final answer has no sources which are clearly there as part of the observation section
```text
Observation:
At Burger king, you will get 50 days of paid annual leave, 50 days of paid sick leave, 12 public holidays, and two additional floating holidays for personal use. If applicable, you can also get up to six weeks of paid parental leave.
For more in-depth coverage, refer to these sources:
1. Company Benefits.pdf, page: 32
```
```
qa = RetrievalQA.from_chain_type(
llm=self.llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True,
chain_type_kwargs=chain_type_kwargs
)
```
Has any experienced missing document sources as part of the final answer
while within the context of the RetrievalQA, source documents exist as am explicitly concatenating them as part of the answer, which I expect to be part of the Final Answer.
```
result = qa({"query": question})
chat_answer = result["result"]
if self.hide_doc_sources:
return chat_answer
formatted_sources = format_sources(result["source_documents"], 2)
chat_answer = f"""
{chat_answer}\n\n
\n\n{formatted_sources}
"""
```
### Suggestion:
_No response_ | Issue: Final Answer missing Document sources when using initialize_agent RetrievalQA with Agent tool boolean flag return_direct=False | https://api.github.com/repos/langchain-ai/langchain/issues/9816/comments | 4 | 2023-08-27T01:08:29Z | 2023-12-03T16:04:31Z | https://github.com/langchain-ai/langchain/issues/9816 | 1,868,315,540 | 9,816 |
[
"hwchase17",
"langchain"
]
| ### System Info
I was just trying to run the Tagging tutorial (no code modification on colab).
https://python.langchain.com/docs/use_cases/tagging
And on the below code part,
```chain = create_tagging_chain_pydantic(Tags, llm)```
I got this error.
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
[<ipython-input-8-4724aee0c891>](https://localhost:8080/#) in <cell line: 1>()
----> 1 chain = create_tagging_chain_pydantic(Tags, llm)
2 frames
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in __init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 2 validation errors for PydanticOutputFunctionsParser
pydantic_schema
subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)
pydantic_schema
value is not a valid dict (type=type_error.dict)
```
Is this a bug?
langchain version
```
!pip show langchain
Name: langchain
Version: 0.0.274
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /usr/local/lib/python3.10/dist-packages
Requires: aiohttp, async-timeout, dataclasses-json, langsmith, numexpr, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by:
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just by running the colab notebook on the tagging tutorial.
No modification applied.
https://python.langchain.com/docs/use_cases/tagging
### Expected behavior
Finishing running the notebook without any issues. | ValidationError: 2 validation errors for PydanticOutputFunctionsParser | https://api.github.com/repos/langchain-ai/langchain/issues/9815/comments | 21 | 2023-08-27T01:01:05Z | 2024-08-06T16:07:28Z | https://github.com/langchain-ai/langchain/issues/9815 | 1,868,314,208 | 9,815 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hey, I'm trying to gather some empirical evidence that RetrievalQAWithSources chain often hallucinates to returns all sources rather than cite them.
Current Issue:
Assuming you have a retriever that returns 4 sources, RetrievalQAWithSources gets confused with the below prompt:
Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
and usually returns all the 4 sources that were fetched by the retriever.
Possible Solution:
Given the following extracted parts of a long document and a question, create a final answer and also include all references to cite your answer in CITATIONS.
### Motivation
Having it named as "SOURCES" often leads to confusion in the prompt.
"cite" involves referring to, acknowledging, or using a source or example to support a claim, idea, or statement.
### Your contribution
I can create a PR to carry forward this change, across the various places we use Citations. Please let me know the kind of evidence or information I'll have to present to make my case for it. | Renaming RetrievalQAWithSources to RetrievalQAWithCitations | https://api.github.com/repos/langchain-ai/langchain/issues/9812/comments | 2 | 2023-08-26T23:33:34Z | 2023-12-02T16:04:52Z | https://github.com/langchain-ai/langchain/issues/9812 | 1,868,296,846 | 9,812 |
[
"hwchase17",
"langchain"
]
| ### System Info
**Background**
I'm trying to run the Streamlit Callbacks example
https://python.langchain.com/docs/integrations/callbacks/streamlit
My code is copy + paste but I get an error from the LangChain library import.
**Error**
```
ImportError: cannot import name 'StreamlitCallbackHandler' from 'langchain.callbacks' (/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/callbacks/__init__.py)
```
**What I tried**
- Upgrading LangChain `pip install langchain --upgrade`
- Reinstall `pip uninstall langchain` then `pip install langchain`
**Version**
langchain 0.0.274
streamlit 1.24.1
**System**
Mac M2 Ventura –13.4.1
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Run the code snippet `streamlit run app.py`
2. Error message
**Run the code with `streamlit run app.py`**
```
import streamlit as st
from langchain.llms import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import StreamlitCallbackHandler
from dotenv import find_dotenv, load_dotenv
load_dotenv(find_dotenv())
st_callback = StreamlitCallbackHandler(st.container())
llm = OpenAI(temperature=0, streaming=True)
tools = load_tools(["ddg-search"])
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
if prompt := st.chat_input():
st.chat_message("user").write(prompt)
with st.chat_message("assistant"):
st_callback = StreamlitCallbackHandler(st.container())
response = agent.run(prompt, callbacks=[st_callback])
st.write(response)
```
### Expected behavior
Should look like the tutorial
<img width="844" alt="image" src="https://github.com/langchain-ai/langchain/assets/94336773/f5a5f166-4013-4c2b-9776-b268156c41f8">
| ImportError: cannot import name 'StreamlitCallbackHandler' from 'langchain.callbacks' | https://api.github.com/repos/langchain-ai/langchain/issues/9811/comments | 3 | 2023-08-26T23:22:26Z | 2023-09-19T02:01:31Z | https://github.com/langchain-ai/langchain/issues/9811 | 1,868,294,838 | 9,811 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version 0.0.273
selenium version 4.11.2
In version 4.11.2, [Selenium fully deprecated and removed executable_path parameter from webdriver](https://github.com/SeleniumHQ/selenium/commit/9f5801c82fb3be3d5850707c46c3f8176e3ccd8e) in favor of using Service class object to pass in path/executable_path parameter. This results in a crash when using SeleniumURLLoader with executable_path upon upgrading selenium to 4.11.2.
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Upgrade selenium to version 4.11.2. Load SeleniumURLLoader with executable_path parameter.
### Expected behavior
Expected Result: SeleniumURLLoader is instantiated
Actual Result: ERROR: WebDriver.__init__() got an unexpected keyword argument 'executable_path' | Issue: Selenium Webdriver parameter executable_path deprecated | https://api.github.com/repos/langchain-ai/langchain/issues/9808/comments | 4 | 2023-08-26T21:15:49Z | 2023-12-02T16:04:57Z | https://github.com/langchain-ai/langchain/issues/9808 | 1,868,270,215 | 9,808 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.274, Python
### Who can help?
@hwchase17 @agola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When initializing the the SagemakerEndpointEmbeddings & SagemakerEndpoint class, I pass the client but still get the error: Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)
content_handler = ContentHandler_LLM()
embeddings = SagemakerEndpoint(
client= get_sagemaker_client(),
endpoint_name= endpointName,
model_kwargs = model_kwargs ,
content_handler=content_handler,
)
content_handler = ContentHandler_Embeddings()
embeddings = SagemakerEndpointEmbeddings(
client= get_sagemaker_client(),
endpoint_name= endpointName,
content_handler=content_handler,
)
As you can see from the documentation below, the validate_environment function does not use the client that I am passing and instead tries creating its own client which causes the issue:
SOURCE: https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html#SagemakerEndpointEmbeddings
SOURCE: https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html#SagemakerEndpoint
### Expected behavior
Passing the client works with BedrockEmbeddings class. The validate_environment function checks if there is value in client, then it just returns the existing values. You can see from below snippet in the class:
SOURCE: https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that AWS credentials to and python package exists in environment."""
if values["client"] is not None:
return values
Please fix this in both the SagemakerEndpointEmbeddings and SagemakerEndpoint classes to check if value already has client then just return the values as done in BedrockEmbeddings class above.
Meanwhile, is there a work around for this? | It should be possible to pass client to SagemakerEndpointEmbeddings & SagemakerEndpoint class | https://api.github.com/repos/langchain-ai/langchain/issues/9807/comments | 1 | 2023-08-26T21:11:58Z | 2023-12-02T16:05:02Z | https://github.com/langchain-ai/langchain/issues/9807 | 1,868,269,407 | 9,807 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain.agents import AgentExecutor
from langchain.agents.agent_types import AgentType
from langchain.chat_models import ChatOpenAI
db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")
toolkit = SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0))
# how to connect to sql server instead of sqlite???
### Idea or request for content:
I need to connect with my sqlserver database, but the documention does not explain how to connect langchain database agent to sqlserver.
| DOC: How to Connect Database Agent with SqlServer | https://api.github.com/repos/langchain-ai/langchain/issues/9804/comments | 20 | 2023-08-26T17:45:27Z | 2024-08-06T16:07:46Z | https://github.com/langchain-ai/langchain/issues/9804 | 1,868,188,378 | 9,804 |
[
"hwchase17",
"langchain"
]
| ### System Info
Google Auth version 2.22.0
Python version 3.8.8
langchain version: 0.0.273
### Who can help?
@eyurtsev the GoogleDriveLoader does not seem to be working as expected. I get a error of AttributeError: 'Credentials' object has no attribute 'with_scopes'.
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I run the code below (and many other versions of the Google Drive code) to import data from Google Drive. However, no matter what I try I get an error of "AttributeError: 'Credentials' object has no attribute 'with_scopes'". The troubleshooting seems to indicate incompatibility with version of Google Auth and Python, but the versions are compatible. I have gone through all the documentation and tutorials online about connecting Google Drive to Langchain, and I have complete and verified every step multiple times. I am at a loss of what else to try to get langchain to connect to Google Drive. Thanks in advance for your help!!!
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import GoogleDriveLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
folder_id = "1ExuF7GUaeDzJpuDn8ThH6t8LBcmjAKE_"
loader = GoogleDriveLoader(
folder_id=folder_id,
recursive=False
)
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=4000, chunk_overlap=0, separators=[" ", ",", "\n"]
)
texts = text_splitter.split_documents(docs)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever()
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
while True:
query = input("> ")
answer = qa.run(query)
print(answer)
### Expected behavior
I expect langchain to be able to connect to me Google Drive files and folders. | GoogleDriveLoader - AttributeError: 'Credentials' object has no attribute 'with_scopes' | https://api.github.com/repos/langchain-ai/langchain/issues/9803/comments | 2 | 2023-08-26T17:11:56Z | 2023-08-26T19:09:12Z | https://github.com/langchain-ai/langchain/issues/9803 | 1,868,177,575 | 9,803 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
May I inquire whether the vector stroe accommodates the provision for customized database connection pooling?
Typically, opening a database connection is an expensive operation, especially if the database is remote. Pooling keeps the connections active so that, when a connection is later requested, one of the active ones is used in preference to having to create another one.
### Suggestion:
Maybe it could be like this?
```python
engine = sqlalchemy.create_engine(connection_string, pool_size=10)
conn = engine.connect()
store = PGVector(
custom_connection=conn,
embedding_function=embeddings
)
``` | Issue: vector store supports custom database connections? | https://api.github.com/repos/langchain-ai/langchain/issues/9802/comments | 1 | 2023-08-26T17:04:57Z | 2023-10-28T07:03:20Z | https://github.com/langchain-ai/langchain/issues/9802 | 1,868,175,495 | 9,802 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Obsidian markdown documents frequently have additional metadata beyond what is in the frontmatter: tags within the document, and (for many users) dataview plugin values.
Add support for this.
### Motivation
Surfacing tags and dataview fields would unlock more abilities for self-querying obsidian data (https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/chroma_self_query)
### Your contribution
I plan to make a PR for this. | Add support tags and dataview fields to ObsidianLoader | https://api.github.com/repos/langchain-ai/langchain/issues/9800/comments | 2 | 2023-08-26T16:02:26Z | 2023-12-02T16:05:07Z | https://github.com/langchain-ai/langchain/issues/9800 | 1,868,155,973 | 9,800 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello, I tried to use AsyncFinalIteratorCallbackHandler (which inherits from AsyncCallbackHandler) to implement async streaming, then encountered following issue:
libs/langchain/langchain/callbacks/manager.py:301: RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited
getattr(handler, event_name)(*args, **kwargs)
following is the demo:
`
import os
import asyncio
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.agents import AgentType
from langchain.agents import initialize_agent
from langchain.callbacks.streaming_aiter_final_only import AsyncFinalIteratorCallbackHandler
os.environ["OPENAI_API_KEY"] = "<your openai key>"
async_streaming_handler = AsyncFinalIteratorCallbackHandler()
memory = ConversationBufferMemory(
memory_key="chat_history", return_messages=True)
llm = ChatOpenAI(temperature=0, streaming=True, model="gpt-3.5-turbo-16k-0613")
agent_chain = initialize_agent(
[], llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True, memory=memory)
async def streaming():
result = agent_chain.run(
'how to gain good reviews from customer?', callbacks=[async_streaming_handler])
print(f"result: {result}")
while True:
token = await async_streaming_handler.queue.get()
print(f"async token: {token}")
await asyncio.sleep(0.1)
asyncio.run(streaming())
`
how can I deal with it, could anyone help me?
### Suggestion:
_No response_ | Issue: RuntimeWarning: coroutine 'AsyncCallbackHandler.on_chat_model_start' was never awaited | https://api.github.com/repos/langchain-ai/langchain/issues/9798/comments | 3 | 2023-08-26T12:46:53Z | 2023-12-02T16:05:12Z | https://github.com/langchain-ai/langchain/issues/9798 | 1,868,085,901 | 9,798 |
[
"hwchase17",
"langchain"
]
| null | Issue: what's the difference between this with RASA | https://api.github.com/repos/langchain-ai/langchain/issues/9792/comments | 4 | 2023-08-26T07:06:24Z | 2023-12-02T16:05:17Z | https://github.com/langchain-ai/langchain/issues/9792 | 1,867,947,019 | 9,792 |
[
"hwchase17",
"langchain"
]
| ### System Info
Colab environment
LangChain version: 0.0.152
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Please follow the tutorial [here](https://learn.activeloop.ai/courses/take/langchain/multimedia/46317672-using-the-open-source-gpt4all-model-locally) and run the code below to reproduce
```
template = """Question: {question}
Answer: Let's answer in two sentence while being funny."""
prompt = PromptTemplate(template=template, input_variables=["question"])
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = GPT4All(model="./models/ggml-model-q4_0.bin", callback_manager=callback_manager, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What happens when it rains somewhere?"
llm_chain.run(question)
```
### Expected behavior
TypeError: GPT4All.generate() got an unexpected keyword argument 'n_ctx' | GPT4All callup failure | https://api.github.com/repos/langchain-ai/langchain/issues/9786/comments | 2 | 2023-08-26T05:44:11Z | 2023-12-18T23:48:24Z | https://github.com/langchain-ai/langchain/issues/9786 | 1,867,920,013 | 9,786 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi, Actually exists a document loader for Google Drive in the Python version, I would like to propose this feature in the javascript version, it is important to our company.
### Motivation
Hi, Actually exists a document loader for Google Drive in the Python version, I would like to propose this feature in the javascript version, it is important to our company.
### Your contribution
no :( I don´t have the knowledge | document loader google drive for javascript version | https://api.github.com/repos/langchain-ai/langchain/issues/9783/comments | 2 | 2023-08-26T02:19:29Z | 2023-12-02T16:05:22Z | https://github.com/langchain-ai/langchain/issues/9783 | 1,867,861,639 | 9,783 |
[
"hwchase17",
"langchain"
]
| ### System Info
pydantic==1.10.12
langchain==0.0.271
System: MacOS Ventura 13.5 (22G74)
Python 3.9.6
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is the main python file:
```
from dotenv import load_dotenv
load_dotenv()
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.agents import Tool
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import PyPDFLoader
from langchain.chains import RetrievalQA
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from pydantic import BaseModel, Field
class DocumentInput(BaseModel):
question: str = Field()
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
tools = []
files = [
{
"name": "alphabet-earnings",
"path": "https://abc.xyz/investor/static/pdf/2023Q1_alphabet_earnings_release.pdf",
},
{
"name": "tesla-earnings",
"path": "https://digitalassets.tesla.com/tesla-contents/image/upload/IR/TSLA-Q1-2023-Update",
},
]
for file in files:
print(f"Loading {file['name']} with path {file['path']}")
loader = PyPDFLoader(file["path"])
pages = loader.load_and_split()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(pages)
embeddings = OpenAIEmbeddings()
retriever = FAISS.from_documents(docs, embeddings).as_retriever()
# Wrap retrievers in a Tool
tools.append(
Tool(
args_schema=DocumentInput,
name=file["name"],
description=f"useful when you want to answer questions about {file['name']}",
func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever),
)
)
agent = initialize_agent(
agent=AgentType.OPENAI_FUNCTIONS,
tools=tools,
llm=llm,
verbose=True,
)
comparison = agent({"input": "which one has higher earning?"})
print(comparison)
print('-------------------------------')
comparison = agent({"input": "did alphabet or tesla have more revenue?"})
print(comparison)
```
The python version is 3.9.6 and pydantic==1.10.12 and langchain==0.0.271
Run the code using python main.py
Here is the output:
```
Loading alphabet-earnings with path https://abc.xyz/investor/static/pdf/2023Q1_alphabet_earnings_release.pdf
Loading tesla-earnings with path https://digitalassets.tesla.com/tesla-contents/image/upload/IR/TSLA-Q1-2023-Update
> Entering new AgentExecutor chain...
Which companies' earnings are you referring to?
> Finished chain.
{'input': 'which one has higher earning?', 'output': "Which companies' earnings are you referring to?"}
-------------------------------
> Entering new AgentExecutor chain...
Invoking: `alphabet-earnings` with `{'question': 'revenue'}`
{'query': 'revenue', 'result': 'The revenue for Alphabet Inc. for the quarter ended March 31, 2023, was $69,787 million.'}
Invoking: `tesla-earnings` with `{'question': 'revenue'}`
{'query': 'revenue', 'result': 'Total revenue for Q1-2023 was $23.3 billion.'}Alphabet Inc. had more revenue than Tesla. Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million, while Tesla's total revenue for Q1-2023 was $23.3 billion.
> Finished chain.
{'input': 'did alphabet or tesla have more revenue?', 'output': "Alphabet Inc. had more revenue than Tesla. Alphabet's revenue for the quarter ended March 31, 2023, was $69,787 million, while Tesla's total revenue for Q1-2023 was $23.3 billion."}
```
### Expected behavior
I expect that for the question 'which one has higher earning?' also gives a good answer just like it did when the question asked was 'did alphabet or tesla have more revenue?' instead.
I was following this guide: https://python.langchain.com/docs/integrations/toolkits/document_comparison_toolkit | RetrievalQA for document comparison is not working for similar type of questions. | https://api.github.com/repos/langchain-ai/langchain/issues/9780/comments | 2 | 2023-08-25T21:54:12Z | 2023-12-01T16:06:23Z | https://github.com/langchain-ai/langchain/issues/9780 | 1,867,724,844 | 9,780 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Some urls will return status code 403 if the right headers are not passed to the requests.get(path) function. My workaround was to provide the headers of that website to get the approval from the server. Would be great to be able to pass headers to PyMuPDFLoader and all the other web based document_loaders
headers = {
"User-Agent": "Chrome/109.0.5414.119",
"Referer": "https://www.ncbi.nlm.nih.gov" if 'ncbi' in self.file_path else None
}
r = requests.get(self.file_path, headers=headers)
the execution would be PyMuPDF(path, headers).load() and if it detects that headers is provided itll provide it downstream to 'get'
### Motivation
Some urls will return status code 403 if the right headers are not passed to the requests.get(path) function. My workaround was to provide the headers of that website to get the approval from the server. Would be great to be able to pass headers to PyMuPDFLoader and all the other web based document_loaders
Mainly an issue with websites like NCBI
### Your contribution
Not experienced enough | Pass headers arg (requests library) to loaders that fetch from web | https://api.github.com/repos/langchain-ai/langchain/issues/9778/comments | 3 | 2023-08-25T20:39:29Z | 2024-05-28T08:23:07Z | https://github.com/langchain-ai/langchain/issues/9778 | 1,867,657,651 | 9,778 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version 0.0.273
Python version 3.8.10
Ubuntu 20.04.5 LTS
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the following code:
```
MY_MODEL_NAME="SUBSTITUTE_THIS_WITH_YOUR_OWN_MODEL_FOR_REPRODUCTION"
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
llm = ChatOpenAI(temperature=0.1, model=MY_MODEL_NAME)
memory = ConversationBufferMemory(return_messages=True)
conversation = ConversationChain(llm=llm, memory=memory)
message1 = "Howdy there"
response1 = conversation(message1)
print(response1)
message2 = "How's it going?"
response2 = conversation(message2)
print(response2)
```
Inspect the requests sent to the server. They will resemble the following packets received by my own server:
request1:
```
'messages': [{'role': 'user', 'content': 'The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n[]\nHuman: Howdy there\nAI:'}]
```
request2:
```
'messages': [{'role': 'user', 'content': "The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.\n\nCurrent conversation:\n[HumanMessage(content='Howdy there', additional_kwargs={}, example=False), AIMessage(content='Howdy! How can I help you today?\\n', additional_kwargs={}, example=False)]\nHuman: How's it going?\nAI:"}]
```
Note that these requests are malformatted.
### Expected behavior
There are two issues.
First, the `messages` packet is clearly malformatted, containing HumanMessage and AIMessage strings.
Second, the `messages` packet only has one conversation turn, and there appears to be no options within the ConversationChain class to allow for multiple turns.
This is particularly problematic as the ConversationChain class requires the user to know what turn tokens are appropriate to use. The user cannot and should not be expected to have knowledge of how the model was trained: there should be an option to leave this up to the server to decide.
My expected (and required for my product) behavior is for the two requests to be formatted as follows.
request1:
```
'messages': [{'role': 'user', 'content': 'Howdy there'}]
```
request2:
```
'messages': [{'role': 'user', 'content': 'Howdy there'}, {'role': 'assistant', 'content': 'Howdy! How can I help you today?\\n'}, {'role': 'user', 'content': "How's it going?"}]
```
Ultimately it is confusing why `conversation(message1); conversation(message2);` sends a different request to the server back-end than `llm([HumanMessage(content=message1), AIMessage(content=response1), HumanMessage(content=message2)])` does. | ConversationChain sends malformatted requests to server | https://api.github.com/repos/langchain-ai/langchain/issues/9776/comments | 3 | 2023-08-25T19:37:16Z | 2023-12-07T16:06:30Z | https://github.com/langchain-ai/langchain/issues/9776 | 1,867,592,455 | 9,776 |
[
"hwchase17",
"langchain"
]
| https://github.com/langchain-ai/langchain/blob/709a67d9bfcff475356924d8461140052dd418f7/libs/langchain/langchain/chains/qa_with_sources/base.py#L123
I've noticed that the retrieval qa chain doesn't always return SOURCES, it sometimes returns "Sources", "sources" or "source". | The RetrievalQAWithSourcesChain with the ExamplePrompt doesn't always return SOURCES as part of it's answers. | https://api.github.com/repos/langchain-ai/langchain/issues/9774/comments | 3 | 2023-08-25T17:57:20Z | 2023-12-02T16:05:32Z | https://github.com/langchain-ai/langchain/issues/9774 | 1,867,473,509 | 9,774 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Right now, the Sagemaker Inference Endpoint LLM does not allow for async requests and it's a performance bottleneck.
I have an API gateway set up such that I have a restful api endpoint backed by the sagemaker inference endpoint.
In an ideal world:
1. Langchain should allow for arbitrary http requests to a backend LLM of our choice fronted by your LLM interfaces. This way, we can standardize async calls for this sort of flow.
2. SagemakerEndpoint should allow for async requests.
Is this feasible?
Does this exist at the moment?
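For context, this is roughly the kind of wrapper we hand-roll today around the API-gateway endpoint; the endpoint URL and the response key are placeholders for our own setup, so treat it as a sketch rather than a reference implementation:
```python
from typing import Any, List, Optional

import aiohttp
import requests
from langchain.llms.base import LLM


class ApiGatewayLLM(LLM):
    """Thin wrapper around a REST endpoint that fronts a SageMaker model."""

    endpoint_url: str  # placeholder: our API-gateway URL

    @property
    def _llm_type(self) -> str:
        return "api_gateway"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        resp = requests.post(self.endpoint_url, json={"inputs": prompt})
        resp.raise_for_status()
        return resp.json()["generated_text"]  # placeholder response key

    async def _acall(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        async with aiohttp.ClientSession() as session:
            async with session.post(self.endpoint_url, json={"inputs": prompt}) as resp:
                body = await resp.json()
                return body["generated_text"]  # placeholder response key
```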
### Suggestion:
_No response_ | Issue: SagemakerEndpoint does not support async calls | https://api.github.com/repos/langchain-ai/langchain/issues/9773/comments | 1 | 2023-08-25T17:21:24Z | 2023-12-01T16:06:42Z | https://github.com/langchain-ai/langchain/issues/9773 | 1,867,424,178 | 9,773 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The issue with PySpark error messages is that they can become quite lengthy, and after some iterations they could potentially cause problems with token limits. In this context, I would like to initiate a discussion about this topic and explore potential solutions.
### Suggestion:
My suggestion would be to generate a summary of the error message before returning it. I am currently not deep enough into the LangChain codebase, and there may be better options, so feel free to comment.
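For illustration, a rough sketch of the shape I have in mind, with simple truncation standing in for a proper summary (the `max_len` value is just a placeholder):
```python
def _shorten_error(message: str, max_len: int = 1000) -> str:
    """Placeholder for a real summary: cap the error text so it cannot blow up the token budget."""
    if len(message) <= max_len:
        return message
    return message[:max_len] + " ... [error message truncated]"


def run_no_throw(self, command: str, fetch: str = "all") -> str:
    try:
        return self.run(command, fetch)
    except Exception as e:
        return f"Error: {_shorten_error(str(e))}"
```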
It concerns the following part of [langchain/libs/langchain/langchain/utilities/spark_sql.py](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/utilities/spark_sql.py):
```python
def run_no_throw(self, command: str, fetch: str = "all") -> str:
"""Execute a SQL command and return a string representing the results.
If the statement returns rows, a string of the results is returned.
If the statement returns no rows, an empty string is returned.
If the statement throws an error, the error message is returned.
"""
try:
return self.run(command, fetch)
except Exception as e:
"""Format the error message"""
return f"Error: {e}"
``` | PySpark error message and token limits in spark_sql | https://api.github.com/repos/langchain-ai/langchain/issues/9767/comments | 1 | 2023-08-25T14:22:11Z | 2023-12-01T16:06:48Z | https://github.com/langchain-ai/langchain/issues/9767 | 1,867,149,666 | 9,767 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.273
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to use Azure Cognitive Search retriever, however it fails because our fields are different:
Our index looks like this:

Our code:
```
llm = AzureChatOpenAI(
openai_api_base=config['AZURE_OPENAI_ENDPOINT'],
openai_api_version=config['AZURE_OPENAI_API_VERSION'],
deployment_name=config['OPENAI_DEPLOYMENT_NAME'],
openai_api_key=config['AZURE_OPENAI_API_KEY'],
openai_api_type=config['OPENAI_API_TYPE'],
model_name=config['OPENAI_MODEL_NAME'],
temperature=0)
embeddings = OpenAIEmbeddings(
openai_api_base=config['AZURE_OPENAI_ENDPOINT'],
openai_api_type="azure",
deployment=config['AZURE_OPENAI_EMBEDDING_DEPLOYED_MODEL_NAME'],
openai_api_key=config['AZURE_OPENAI_API_KEY'],
model=config['AZURE_OPENAI_EMBEDDING_DEPLOYED_MODEL_NAME'],
chunk_size=1)
fields = [
SimpleField(
name="id",
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchableField(
name="text",
type=SearchFieldDataType.String,
searchable=True,
),
SearchField(
name="embedding",
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=1536,
vector_search_configuration="default",
)
]
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=config['AZURE_SEARCH_SERVICE_ENDPOINT'],
azure_search_key=config['AZURE_SEARCH_ADMIN_KEY'],
index_name=config['AZURE_SEARCH_VECTOR_INDEX_NAME'],
embedding_function=embeddings.embed_query,
fields=fields,
)
retriever = vector_store.as_retriever(search_type="similarity", kwargs={"k": 3})
# Creating instance of RetrievalQA
chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True)
# Generating response to user's query
response = chain({"query": config['question']})
```
I traced it all down to the function: vector_search_with_score in azuresearch.py
```
results = self.client.search(
search_text="",
vectors=[
Vector(
value=np.array(
self.embedding_function(query), dtype=np.float32
).tolist(),
k=k,
fields=FIELDS_CONTENT_VECTOR,
)
],
select=[FIELDS_ID, FIELDS_CONTENT, FIELDS_METADATA],
filter=filters,
)
```
The code is trying to use FIELDS_CONTENT_VECTOR, which is a constant and is not our field name.
I guess some other issues may arise with other parts of the code where constants are used.
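What we would like is a way to tell the wrapper which fields to use, something along these lines (purely illustrative: this is not the current API, and the keyword names are made up):
```python
# Hypothetical keyword arguments for overriding the hard-coded field names
vector_store = AzureSearch(
    azure_search_endpoint=config['AZURE_SEARCH_SERVICE_ENDPOINT'],
    azure_search_key=config['AZURE_SEARCH_ADMIN_KEY'],
    index_name=config['AZURE_SEARCH_VECTOR_INDEX_NAME'],
    embedding_function=embeddings.embed_query,
    fields=fields,
    content_field="text",              # instead of the FIELDS_CONTENT constant
    content_vector_field="embedding",  # instead of FIELDS_CONTENT_VECTOR
    id_field="id",                     # instead of FIELDS_ID
)
```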
Why do we have different field names?
We are using the Microsoft examples to set up all the Azure indexing, indexers, skillsets and data sources:
https://github.com/Azure/cognitive-search-vector-pr/blob/main/demo-python/code/azure-search-vector-ingestion-python-sample.ipynb
and their OpenAI embedding generator deployed as an Azure Function:
https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Vector/EmbeddingGenerator/README.md
I wrote a blog post series about this
https://medium.com/python-in-plain-english/elevate-chat-ai-applications-mastering-azure-cognitive-search-with-vector-storage-for-llm-a2082f24f798
### Expected behavior
I should be able to define the fields we want to use, but the code uses constants | AzureSearch.py is using constant field names instead of ours | https://api.github.com/repos/langchain-ai/langchain/issues/9765/comments | 14 | 2023-08-25T14:18:44Z | 2024-07-11T07:54:13Z | https://github.com/langchain-ai/langchain/issues/9765 | 1,867,141,839 | 9,765 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version : 0.0.273
Python version : 3.10.8
Platform : macOS 13.5.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import langchain
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()
llm = ChatOpenAI(model_name="gpt-3.5-turbo", streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
resp = llm.predict("Tell me a joke")
resp1 = llm.predict("Tell me a joke")
```
output:
```
Sure, here's a classic one for you:
Why don't scientists trust atoms?
Because they make up everything!
```
### Expected behavior
I'd expect both responses to be streamed to stdout but as the second one is coming from the memory cache, the callback handler `on_llm_new_token` is never called and thus the second response never printed.
I guess `on_llm_new_token` should be called once with the full response loaded from cache to ensure a consistent behavior between new and cached responses. | Streaming doesn't work correctly with caching | https://api.github.com/repos/langchain-ai/langchain/issues/9762/comments | 5 | 2023-08-25T13:35:30Z | 2024-04-23T09:58:26Z | https://github.com/langchain-ai/langchain/issues/9762 | 1,867,071,726 | 9,762 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.191
llama-cpp-python==0.1.78
chromadb==0.3.22
python3.10
wizard-vicuna-13B.ggmlv3.q4_0.bin
### Who can help?
@hwchase17 @agol
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have a slightly modified version of privateGPT running. I am facing a weird issue where the SelfQueryRetriever is creating attributes that do not exist in my ChromaDB. This is crashing the app when running the RetrievalQA with the error `chromadb.errors.NoDatapointsException: No datapoints found for the supplied filter`. I have provided a list of the attributes that exist in my DB, but the SelfQueryRetriever still creates filters on metadata that does not exist.
To reproduce the problem, use the wizard-vicuna-13B.ggmlv3.q4_0.bin model provided by TheBloke/wizard-vicuna-13B-GGML on HuggingFace and run the below code. I don't think the choice of model has an impact here. The issue I am facing is the creation of metadata filters that do not exist.
Is there a way to limit the attributes allowed by the SelfQueryRetriever?
```python
import logging
import click
import torch
from auto_gptq import AutoGPTQForCausalLM
from huggingface_hub import hf_hub_download
from langchain.chains import RetrievalQA
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.llms import HuggingFacePipeline, LlamaCpp
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
import time
# from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.vectorstores import Chroma
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
GenerationConfig,
LlamaForCausalLM,
LlamaTokenizer,
pipeline,
)
from constants import CHROMA_SETTINGS, EMBEDDING_MODEL_NAME, PERSIST_DIRECTORY, MODEL_ID, MODEL_BASENAME
SEED = 42
def load_model(device_type, model_id, model_basename=None, local_model: bool = False, local_model_path: str = None):
"""
Select a model for text generation using the HuggingFace library.
If you are running this for the first time, it will download a model for you.
subsequent runs will use the model from the disk.
Args:
device_type (str): Type of device to use, e.g., "cuda" for GPU or "cpu" for CPU.
model_id (str): Identifier of the model to load from HuggingFace's model hub.
model_basename (str, optional): Basename of the model if using quantized models.
Defaults to None.
Returns:
HuggingFacePipeline: A pipeline object for text generation using the loaded model.
Raises:
ValueError: If an unsupported model or device type is provided.
"""
if local_model:
logging.info(f'Loading local model at {local_model_path}')
else:
logging.info(f"Loading Model: {model_id}, on: {device_type}")
logging.info("This action can take a few minutes!")
if model_basename is not None:
# if "ggml" in model_basename:
if ".ggml" in model_basename:
logging.info("Using Llamacpp for GGML quantized models")
if local_model:
model_path = local_model_path
else:
model_path = hf_hub_download(repo_id=model_id, filename=model_basename)
max_ctx_size = 2048
kwargs = {
"model_path": model_path,
"n_ctx": max_ctx_size,
"max_tokens": max_ctx_size,
}
if device_type.lower() == "mps":
kwargs["n_gpu_layers"] = 1000
if device_type.lower() == "cuda":
kwargs['seed'] = SEED
kwargs["n_gpu_layers"] = 40
return LlamaCpp(**kwargs)
else:
# The code supports all huggingface models that ends with GPTQ and have some variation
# of .no-act.order or .safetensors in their HF repo.
logging.info("Using AutoGPTQForCausalLM for quantized models")
if ".safetensors" in model_basename:
# Remove the ".safetensors" ending if present
model_basename = model_basename.replace(".safetensors", "")
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
logging.info("Tokenizer loaded")
model = AutoGPTQForCausalLM.from_quantized(
model_id,
model_basename=model_basename,
use_safetensors=True,
trust_remote_code=True,
device="cuda:0",
use_triton=False,
quantize_config=None,
)
elif (
device_type.lower() == "cuda"
): # The code supports all huggingface models that ends with -HF or which have a .bin
# file in their HF repo.
logging.info("Using AutoModelForCausalLM for full models")
tokenizer = AutoTokenizer.from_pretrained(model_id)
logging.info("Tokenizer loaded")
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
trust_remote_code=True,
# max_memory={0: "15GB"} # Uncomment this line with you encounter CUDA out of memory errors
)
model.tie_weights()
else:
logging.info("Using LlamaTokenizer")
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id)
# Load configuration from the model to avoid warnings
generation_config = GenerationConfig.from_pretrained(model_id)
# see here for details:
# https://huggingface.co/docs/transformers/
# main_classes/text_generation#transformers.GenerationConfig.from_pretrained.returns
# Create a pipeline for text generation
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_length=2048,
temperature=0,
top_p=0.95,
repetition_penalty=1.15,
generation_config=generation_config,
)
local_llm = HuggingFacePipeline(pipeline=pipe)
logging.info("Local LLM Loaded")
return local_llm
# chose device typ to run on as well as to show source documents.
@click.command()
@click.option(
"--device_type",
default="cuda" if torch.cuda.is_available() else "cpu",
type=click.Choice(
[
"cpu",
"cuda",
"ipu",
"xpu",
"mkldnn",
"opengl",
"opencl",
"ideep",
"hip",
"ve",
"fpga",
"ort",
"xla",
"lazy",
"vulkan",
"mps",
"meta",
"hpu",
"mtia",
],
),
help="Device to run on. (Default is cuda)",
)
@click.option(
"--show_sources",
"-s",
is_flag=True,
help="Show sources along with answers (Default is False)",
)
@click.option(
"--local_model",
"-lm",
is_flag=True,
help="Use local model (Default is False)",
)
@click.option(
"--local_model_path",
"-lmp",
default=None,
help="Path to local model. (Default is None)",
)
def main(device_type, show_sources, local_model: bool = False, local_model_path: str = None):
"""
This function implements the information retrieval task.
1. Loads an embedding model, can be HuggingFaceInstructEmbeddings or HuggingFaceEmbeddings
    2. Loads the existing vectorstore that was created by ingest.py
    3. Loads the local LLM using the load_model function - You can now set different LLMs.
    4. Sets up the Question Answer retrieval chain.
5. Question answers.
"""
logging.info(f"Running on: {device_type}")
logging.info(f"Display Source Documents set to: {show_sources}")
embeddings = HuggingFaceInstructEmbeddings(model_name=EMBEDDING_MODEL_NAME, model_kwargs={"device": device_type})
# uncomment the following line if you used HuggingFaceEmbeddings in the ingest.py
# embeddings = HuggingFaceEmbeddings(model_name=EMBEDDING_MODEL_NAME)
# load the vectorstore
db = Chroma(
persist_directory=PERSIST_DIRECTORY,
embedding_function=embeddings,
client_settings=CHROMA_SETTINGS,
)
template = """Use the following pieces of context to answer the question at the end. If you don't know the answer,\
just say that you don't know, don't try to make up an answer.
{context}
{history}
Question: {question}
Helpful Answer:"""
prompt = PromptTemplate(input_variables=["history", "context", "question"], template=template)
memory = ConversationBufferMemory(input_key="question", memory_key="history")
llm = load_model(
device_type, model_id=MODEL_ID, model_basename=MODEL_BASENAME, local_model=local_model,
local_model_path=local_model_path)
metadata_field_info = [
AttributeInfo(
name='country',
description='The country name.',
type='string'
),
AttributeInfo(
name='source',
description='Filename and location of the source file.',
type='string'
)
]
retriever = SelfQueryRetriever.from_llm(
llm=llm,
vectorstore=db,
document_contents='News, policies, and laws for various countries.',
metadata_field_info=metadata_field_info,
verbose=True,
)
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True,
chain_type_kwargs={"prompt": prompt, "memory": memory},
)
# Interactive questions and answers
while True:
query = input("\nEnter a query: ")
if query == "exit":
break
# Get the answer from the chain
start = time.time()
res = qa(query)
answer, docs = res["result"], res["source_documents"]
# Print the result
print(f'Time: {time.time() - start}')
print("\n\n> Question:")
print(query)
print("\n> Answer:")
print(answer)
if show_sources: # this is a flag that you can set to disable showing answers.
# # Print the relevant sources used for the answer
print("----------------------------------SOURCE DOCUMENTS---------------------------")
for document in docs:
print("\n> " + document.metadata["source"] + ":")
print(document.page_content)
print("----------------------------------SOURCE DOCUMENTS---------------------------")
if __name__ == "__main__":
logging.basicConfig(
format="%(asctime)s - %(levelname)s - %(filename)s:%(lineno)s - %(message)s", level=logging.INFO
)
main()
```
```bash
2023-08-25 12:53:40,256 - INFO - run_localGPT.py:209 - Running on: cuda
2023-08-25 12:53:40,256 - INFO - run_localGPT.py:210 - Display Source Documents set to: True
2023-08-25 12:53:40,397 - INFO - SentenceTransformer.py:66 - Load pretrained SentenceTransformer: hkunlp/instructor-large
load INSTRUCTOR_Transformer
max_seq_length 512
2023-08-25 12:53:42,924 - INFO - __init__.py:88 - Running Chroma using direct local API.
2023-08-25 12:53:42,929 - WARNING - __init__.py:43 - Using embedded DuckDB with persistence: data will be stored in: /home/waleedalfaris/localGPT/DB
2023-08-25 12:53:42,934 - INFO - ctypes.py:22 - Successfully imported ClickHouse Connect C data optimizations
2023-08-25 12:53:42,937 - INFO - json_impl.py:45 - Using python library for writing JSON byte strings
2023-08-25 12:53:47,543 - INFO - duckdb.py:460 - loaded in 129337 embeddings
2023-08-25 12:53:47,545 - INFO - duckdb.py:472 - loaded in 1 collections
2023-08-25 12:53:47,546 - INFO - duckdb.py:89 - collection with name langchain already exists, returning existing collection
2023-08-25 12:53:47,546 - INFO - run_localGPT.py:50 - Loading local model at /home/waleedalfaris/localGPT/models/wizard-vicuna-13B.ggmlv3.q4_0.bin
2023-08-25 12:53:47,546 - INFO - run_localGPT.py:53 - This action can take a few minutes!
2023-08-25 12:53:47,546 - INFO - run_localGPT.py:58 - Using Llamacpp for GGML quantized models
ggml_init_cublas: found 1 CUDA devices:
Device 0: Tesla T4, compute capability 7.5
llama.cpp: loading model from /home/waleedalfaris/localGPT/models/wizard-vicuna-13B.ggmlv3.q4_0.bin
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_head_kv = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: n_gqa = 1
llama_model_load_internal: rnorm_eps = 5.0e-06
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 0.11 MB
llama_model_load_internal: using CUDA for GPU acceleration
llama_model_load_internal: mem required = 669.91 MB (+ 1600.00 MB per state)
llama_model_load_internal: allocating batch_size x (640 kB + n_ctx x 160 B) = 480 MB VRAM for the scratch buffer
llama_model_load_internal: offloading 40 repeating layers to GPU
llama_model_load_internal: offloaded 40/43 layers to GPU
llama_model_load_internal: total VRAM used: 7288 MB
llama_new_context_with_model: kv self size = 1600.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
Enter a query: what is the penalty for cybercrime in the United Arab Emirates?
llama_print_timings: load time = 786.98 ms
llama_print_timings: sample time = 158.14 ms / 196 runs ( 0.81 ms per token, 1239.39 tokens per second)
llama_print_timings: prompt eval time = 83050.84 ms / 920 tokens ( 90.27 ms per token, 11.08 tokens per second)
llama_print_timings: eval time = 22099.62 ms / 195 runs ( 113.33 ms per token, 8.82 tokens per second)
llama_print_timings: total time = 105962.51 ms
query=' ' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='country', value='United Arab Emirates'), Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='penalty', value='cybercrime')]) limit=None
Traceback (most recent call last):
File "/home/waleedalfaris/localGPT/run_localGPT.py", line 302, in <module>
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/home/waleedalfaris/localGPT/run_localGPT.py", line 279, in main
query = input("\nEnter a query: ")
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 140, in __call__
raise e
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 134, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 119, in _call
docs = self._get_docs(question)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py", line 181, in _get_docs
return self.retriever.get_relevant_documents(question)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/retrievers/self_query/base.py", line 90, in get_relevant_documents
docs = self.vectorstore.search(new_query, self.search_type, **search_kwargs)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 81, in search
return self.similarity_search(query, **kwargs)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 182, in similarity_search
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 230, in similarity_search_with_score
results = self.__query_collection(
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/utils.py", line 53, in wrapper
return func(*args, **kwargs)
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 121, in __query_collection
return self._collection.query(
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/chromadb/api/models/Collection.py", line 219, in query
return self._client._query(
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/chromadb/api/local.py", line 408, in _query
uuids, distances = self._db.get_nearest_neighbors(
File "/home/waleedalfaris/localGPT/venv/lib/python3.10/site-packages/chromadb/db/clickhouse.py", line 576, in get_nearest_neighbors
raise NoDatapointsException(
chromadb.errors.NoDatapointsException: No datapoints found for the supplied filter {"$and": [{"country": {"$eq": "United Arab Emirates"}}, {"penalty": {"$eq": "cybercrime"}}]}
2023-08-25 13:05:24,584 - INFO - duckdb.py:414 - Persisting DB to disk, putting it in the save folder: /home/waleedalfaris/localGPT/DB
```
### Expected behavior
Result of SelfQueryRetriever should only contain filters country with a value of United Arab Emirates and query should not be blank. Should have an ouptut similar to `query='cybercrime penalty' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='country', value='United Arab Emirates') limit=None` | SelfQueryRetriever creates attributes that do not exist | https://api.github.com/repos/langchain-ai/langchain/issues/9761/comments | 5 | 2023-08-25T13:31:11Z | 2024-01-12T11:56:38Z | https://github.com/langchain-ai/langchain/issues/9761 | 1,867,063,910 | 9,761 |
[
"hwchase17",
"langchain"
]
| ### System Info
- LangChain version: 0.0.105
- Platform: Macbook Pro M1 - Mac OS Ventura
- Node.js version: v18.17.1
- qdrant/js-client-rest: 1.4.0
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Related Components:**
- Vector Stores / Retrievers
- JavaScript LangChain Qdrant Wrapper
**Information:**
The issue arises when attempting to perform a semantic search using the Qdrant wrapper in LangChain through JavaScript. The provided code snippet is as follows:
```javascript
const embeddings = new OpenAIEmbeddings({ openAIApiKey: process.env.OPENAI_API_KEY })
const vectorStore = await QdrantVectorStore.fromExistingCollection(
embeddings,
{
url: process.env.QDRANT_URL,
collectionName: process.env.QDRANT_COLLECTION_NAME
})
const results = await vectorStore.similaritySearch("some query", 4)
```
The problem is that the `results` list of Documents contains undefined `pageContent`, while the metadata is fetched correctly. Strangely, when performing the same operation using the Python LangChain Qdrant wrapper, the `page_content` and `metadata` are both retrieved from the same Qdrant vectorstore correctly.
**Reproduction:**
To reproduce the issue, follow these steps:
1. Use the provided code snippet to perform a semantic search using the JavaScript LangChain Qdrant wrapper.
2. Examine the `results` list of Documents and observe that the `pageContent` property is undefined.
3. Compare the results with the results from the python equivalent code snippet:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant
from qdrant_client import QdrantClient
qdrant_client = QdrantClient(
api_key=os.getenv("QDRANT_API_KEY"),
url=os.getenv("QDRANT_URL")
)
# get existing Qdrant vectorstore
vectorstore = Qdrant(
embeddings=openai_embeddings,
client=qdrant_client,
collection_name=os.getenv("QDRANT_COLLECTION_NAME"),
)
vectorstore.similarity_search(query='some query', k=4)
```
Please assist in resolving this discrepancy in the behavior between the two wrappers.
### Expected behavior
The expected behavior is that when performing a semantic search using the JavaScript LangChain Qdrant wrapper, the `results` list of Documents should contain valid `pageContent` along with correct metadata, similar to the behavior in the Python LangChain Qdrant wrapper.
Expected result (works with the python Qdrant wrapper):
```bash
[Document(page_content='\n Some content of a document ', metadata={'source': 'https://some.source.com', 'title': 'some title'})
...
]
```
Actual result:
```bash
[Document(page_content=undefined, metadata={'source': 'https://some.source.com', 'title': 'some title'})
...
]
``` | JavaScript LangChain Qdrant semantic search results: pageContent in each Document is undefined | https://api.github.com/repos/langchain-ai/langchain/issues/9760/comments | 3 | 2023-08-25T13:08:16Z | 2023-08-25T13:27:22Z | https://github.com/langchain-ai/langchain/issues/9760 | 1,867,029,264 | 9,760 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello,
We have two separate Docker images, as follows.
1. One image loads documents, tokenizes them, creates embeddings, and stores them in a vector store DB like this: `vector_store_db: Weaviate = weaviateInstance.from_documents(documents, self.embeddings, by_text=False)`
2. Another Docker image runs FastAPI and receives the actual query. We want to be able to store the `vector_store_db` (created by the first Docker image) in Redis so that the second Docker image can get the `vector_store_db` from Redis and execute the query against it by invoking a function like `similar_doc = vector_store_db.similarity_search("Question ?", k=1)`
We tried a number of options to store the `vector_store_db` (which is of type Weaviate, as per the documentation here [https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.weaviate.Weaviate.html#langchain.vectorstores.weaviate.Weaviate.from_texts](url)) in Redis, but we are getting serialization issues. We tried `pickle`, `dill` and `json` with no luck yet.
And then we came across the issue link [https://github.com/langchain-ai/langchain/issues/9288](url) and we tried the `dumps()` option and our relevant code snippet looks like this
```python
vs_redis_obj = dumps(vector_store_db)
redis_client = redis.Redis(host='localhost', port=6379, encoding="utf-8", decode_responses=True)
redis_client.set("ourkey", vs_redis_obj)
vs_obj: Weaviate = redis_client.get("ourkey")

# Start: sample code for querying the vector store DB
similar_doc = vs_obj.similarity_search("Who is trying to invade earth?", k=1)
```
but we are getting the error `Error :'str' object has no attribute 'similarity_search'`
Basically, we understand why the error occurs:
1. When we store the `vector_store_db` object in Redis, it gets serialized to its `str` equivalent.
2. So when we call `redis.get()`, we get back the `str` equivalent of `vector_store_db`, which is why our call to `similarity_search()` fails.
Any ideas how we can fix this, please?
Basically, we need to be able to make the `vector_store_db` (created by one Docker image) available to another Docker image through Redis.
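For what it is worth, the workaround we are currently leaning towards (untested sketch below) is to skip serializing the store object entirely and have the FastAPI container rebuild the Weaviate wrapper against the same Weaviate server and index. A supported way to share the object itself would still be preferable:
```python
import weaviate
from langchain.vectorstores import Weaviate

# Both containers point at the same Weaviate server; the index name is a placeholder
# for whatever index the ingestion container created with from_documents().
client = weaviate.Client("http://weaviate:8080")
vector_store_db = Weaviate(
    client=client,
    index_name="LangChain_xxxx",
    text_key="text",          # assuming the default text key
    embedding=embeddings,     # the same embedding model used at ingestion time
    by_text=False,
)
similar_doc = vector_store_db.similarity_search("Question ?", k=1)
```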
Any help / suggestion is much appreciated and thanks in advance.
### Suggestion:
_No response_ | Issue: To be able to store Weaviate (for that matter any vector store) Vector Store in REDIS | https://api.github.com/repos/langchain-ai/langchain/issues/9758/comments | 10 | 2023-08-25T12:46:04Z | 2023-12-03T16:04:41Z | https://github.com/langchain-ai/langchain/issues/9758 | 1,866,995,735 | 9,758 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hey, I wanted to request a feature for the map-reduce chain where a person can pass their own list of pre-chunked data instead of having the chain create chunks by applying a text splitter to a corpus.
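Concretely, I would like the usage to look something like this (just a sketch; `my_chunks` is a list of strings I have already chunked myself and `llm` is any LLM):
```python
from langchain.docstore.document import Document
from langchain.chains.summarize import load_summarize_chain

docs = [Document(page_content=chunk) for chunk in my_chunks]
chain = load_summarize_chain(llm, chain_type="map_reduce")
result = chain.run(docs)
```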
### Motivation
Sometimes one wants to use their own chunks of data in a map-reduce flow and does not want a corpus split by a text splitter or character splitter.
### Your contribution
I can work on it by raising a PR. | Custom Map-Reduce chain | https://api.github.com/repos/langchain-ai/langchain/issues/9757/comments | 2 | 2023-08-25T12:39:58Z | 2023-12-01T16:07:08Z | https://github.com/langchain-ai/langchain/issues/9757 | 1,866,987,071 | 9,757 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Can I load my local model and use it via `chain = LLMChain(llm=chat, prompt=chat_prompt)`?
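For example, something like this is what I am trying to do, sketched with a local Hugging Face model directory (the path and the prompt are placeholders):
```python
from langchain import LLMChain, PromptTemplate
from langchain.llms import HuggingFacePipeline

# Load a model from a local directory instead of the Hugging Face Hub
llm = HuggingFacePipeline.from_model_id(
    model_id="/path/to/my/local-model",
    task="text-generation",
)

chat_prompt = PromptTemplate.from_template("Answer the question: {question}")
chain = LLMChain(llm=llm, prompt=chat_prompt)
print(chain.run(question="What is LangChain?"))
```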
### Suggestion:
_No response_ | Issue: can i load my local model by chain = LLMChain(llm=chat, prompt=chat_prompt) | https://api.github.com/repos/langchain-ai/langchain/issues/9752/comments | 4 | 2023-08-25T09:51:51Z | 2023-12-01T16:07:13Z | https://github.com/langchain-ai/langchain/issues/9752 | 1,866,736,311 | 9,752 |
[
"hwchase17",
"langchain"
]
| ### System Info
## Description:
### Context:
I'm using LangChain to develop an application that interacts with the gpt-3.5-turbo-16k model to handle long chains of up to 16384 tokens.
### Problem:
While the first message processes quickly **(especially if I have no previous context in the inputs)**, from the second message onward I experience excessively long execution times, exceeding 5 minutes. On occasion, I receive timeout errors exceeding 10 minutes, like the following:
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600).`
#### **It's worth noting that when using the OpenAI API directly with the same context and length, the response is almost immediate.**
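(For reference, this is roughly the shape of the direct call I compared against, not the exact script:)
```python
import openai

previous_turns = [
    {"role": "user", "content": "..."},       # prior context, same as in the LangChain run
    {"role": "assistant", "content": "..."},
]
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    temperature=0,
    messages=previous_turns + [{"role": "user", "content": "the same new message"}],
)
print(completion.choices[0].message.content)
```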
### Relevant Code:
```
from langchain.chains.conversation.prompt import ENTITY_MEMORY_CONVERSATION_TEMPLATE
from langchain.chat_models import ChatOpenAI
from langchain.memory.entity import ConversationEntityMemory
def create_conversation_chain(inputs, num_msgs=3):
"""
Creates the base instance for the conversation with the llm and the memory
:param num_msgs: Number of messages to include in the memory buffer
:return: The conversation chain instance
"""
load_dotenv()
llm = ChatOpenAI(
temperature=0,
model_name=MODEL,
verbose=False,
)
memory = ConversationEntityMemory(
llm=llm,
k=num_msgs,
)
if inputs:
for inp in inputs:
memory.save_context(inp[0], inp[1])
conversation = ConversationChain(
llm=llm,
memory=memory,
prompt=ENTITY_MEMORY_CONVERSATION_TEMPLATE,
verbose=True,
)
return conversation
conversation = create_conversation_chain(inputs=self.input_msgs_entries, num_msgs=num_msgs_to_include)
ans = self.conversation.predict(input=msg)
```
Feel free to send me questions about my code if you need to know something else, but essentially that is what I have.
### Additional Details:
1. Operating System: Windows 10
2. Python Version: 3.10
3. LangChain Version: 0.0.271
4. I've tried with the streaming=True parameter cause I saw that in other issue, but the results remain the same.
### Request:
Could you help me identify and address the cause of these prolonged execution times when using LangChain, especially compared to direct use of the OpenAI API?
Thank you very much for your help!! ^^
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
## Steps to Reproduce the Behavior:
1. Setup Environment:
> - Install LangChain version 0.0.271 on a Windows 10 (but i tryed it in ubuntu and same problems) machine with Python 3.10.
> - gpt-3.5-turbo-16k model.
2. Implement the Conversation Chain:
> - Utilize the create_conversation_chain function provided in the initial description.
3. Provide Context:
> - Use a context (inputs) that is long enough to approach 6000 to 9000 (context + msg) tokens, although I am also getting these time-consuming responses with smaller contexts. I attach below a file with text (in Spanish) that I use as an input to call gpt-3.5-turbo-16k.
4. Initialize and Predict:
> - Call the function:
> `conversation = create_conversation_chain(inputs=input_msgs_entries, num_msgs=num_msgs_to_include_in_buffer)`
> - Then, predict using:
> `ans = conversation.predict(input=msg)`
5. Observe Delay:
> - Note that while the first message processes quickly **(especially if I have no previous context in the inputs)**, subsequent messages experience prolonged execution times, sometimes exceeding 10 minutes.
> - Occasionally, timeout errors might occur, indicating a failure in the request due to excessive waiting time.
6. Compare with Direct OpenAI API:
> - Directly send the same context and message to the OpenAI API, without using LangChain.
> - Observe that the response is almost immediate, highlighting the difference in performance.
7. Test with Streaming:
> - Set the streaming=True parameter and observe that the prolonged execution times persist.
[test_random_conv_text.txt](https://github.com/langchain-ai/langchain/files/12437477/test_random_conv_text.txt)
### Expected behavior
## Expected Behavior:
When utilizing the create_conversation_chain function with the gpt-3.5-turbo-16k model to handle chains close to 16384 tokens:
1. **Consistent Performance:** The execution times for each message, regardless of it being the first or subsequent ones, should be relatively consistent and not show drastic differences.
2. **Reasonable Response Times:** Even for longer contexts approaching the model's token limit, the response time should be within a reasonable range, certainly not exceeding 10 minutes for a single prediction.
3. **No Timeout Errors:** The system should handle the requests efficiently, avoiding timeout errors, especially if the direct OpenAI API call with the same context responds almost immediately.
4. **Memory Efficiency:** The memory management system, especially when handling the context and inputs, should efficiently store and retrieve data without causing significant delays. | Prolonged execution times when using ConversationChain and ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/9750/comments | 4 | 2023-08-25T09:08:55Z | 2023-12-27T16:05:53Z | https://github.com/langchain-ai/langchain/issues/9750 | 1,866,661,448 | 9,750 |
[
"hwchase17",
"langchain"
]
| ### Discussed in https://github.com/langchain-ai/langchain/discussions/9605
<div type='discussions-op-text'>
<sup>Originally posted by **nima-cp** August 22, 2023</sup>
Hello everyone, I want to build a Q&A over some documents, including PDF, XML and CSV files. I'm having some difficulty writing a DirectoryLoader for different types of files in a folder. I'm using Chroma and couldn't find a way to solve this. However, I found this in the TS documentation:
```typescript
const directoryLoader = new DirectoryLoader(filePath, {
'.pdf': (path) => new PDFLoader(path, { splitPages: true }),
'.docx': (path) => new DocxLoader(path),
'.json': (path) => new JSONLoader(path, '/texts'),
'.jsonl': (path) => new JSONLinesLoader(path, '/html'),
'.txt': (path) => new TextLoader(path),
'.csv': (path) => new CSVLoader(path, 'text'),
'.htm': (path) => new UnstructuredLoader(path),
'.html': (path) => new UnstructuredLoader(path),
'.ppt': (path) => new UnstructuredLoader(path),
'.pptx': (path) => new UnstructuredLoader(path),
});
```
How can I write the same in Python?
```python
loader = DirectoryLoader(
filePath,
glob="./*.pdf",
loader_cls=PyMuPDFLoader,
show_progress=True,
)
```</div> | DirectoryLoader for different file types | https://api.github.com/repos/langchain-ai/langchain/issues/9749/comments | 5 | 2023-08-25T09:03:10Z | 2024-04-22T10:04:26Z | https://github.com/langchain-ai/langchain/issues/9749 | 1,866,651,760 | 9,749 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I would like to have a BambooHR tool here (https://python.langchain.com/docs/integrations/tools/).
To make it possible to request information about employees within the company, their vacations, and so on.
### Motivation
BambooHR is quite a popular service, so I believe this tool will be used a lot.
### Your contribution
I am willing to contribute by coding a portion, but I am hesitant to code everything as it may be too much. It would be great if other enthusiasts could also join in.
I've already found the BambooHR OpenAPI file
[bamboo_openapi.json.zip](https://github.com/langchain-ai/langchain/files/12437406/bamboo_openapi.json.zip)
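For discussion, a very rough sketch of the shape I have in mind. The endpoint path and auth scheme below come from my reading of the public BambooHR docs and should be treated as assumptions, not verified code:
```python
import requests
from langchain.tools import BaseTool


class BambooHRDirectoryTool(BaseTool):
    name = "bamboohr_employee_directory"
    description = "Fetches the employee directory from the company's BambooHR account."

    api_key: str
    subdomain: str  # the company's BambooHR subdomain

    def _run(self, query: str = "") -> str:
        # Endpoint path and auth scheme are assumptions; see the attached OpenAPI file.
        url = f"https://api.bamboohr.com/api/gateway.php/{self.subdomain}/v1/employees/directory"
        resp = requests.get(
            url,
            headers={"Accept": "application/json"},
            auth=(self.api_key, "x"),  # API key as basic-auth user (assumed)
        )
        resp.raise_for_status()
        return resp.text

    async def _arun(self, query: str = "") -> str:
        raise NotImplementedError("Async is not covered in this sketch.")
```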
| BambooHR Tool | https://api.github.com/repos/langchain-ai/langchain/issues/9748/comments | 16 | 2023-08-25T08:56:25Z | 2023-12-01T16:07:18Z | https://github.com/langchain-ai/langchain/issues/9748 | 1,866,641,171 | 9,748 |
[
"hwchase17",
"langchain"
]
Hi, I would like to build a chatbot that supports multiple users accessing it concurrently.
Since the LLM is very large, my VRAM can only hold one copy of the model.
I would like to know if there is any way to load the model once and serve multiple requests concurrently.
Here is what I just tried.
I created two threads, and each thread runs the LLM with a different prompt.
Unfortunately, the responses are very strange: r1 and r2 are gibberish.
If I remove one of the threads, the response is good.
```python
from langchain.llms import CTransformers
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
import yaml
import os
import threading
import datetime
import time


def job1():
    print("job1: ", datetime.datetime.now())
    question1 = "Please introduce the history of China"
    r1 = llm(question1)
    print("r1:", r1)


def job2():
    print("job2: ", datetime.datetime.now())
    question2 = "Please introduce the history of The United States"
    r2 = llm(question2)
    print("r2:", r2)


# load the model once
llm = LlamaCpp(
    model_path="/workspace/test/llama-2-7b-combined/ggml-model-q6_K.bin",
    n_gpu_layers=20,
    n_batch=128,
    n_ctx=2048,
    temperature=0.1,
    max_tokens=512,
)

t1 = threading.Thread(target=job1)
t2 = threading.Thread(target=job2)
t1.start()
t2.start()
t1.join()
t2.join()
print("Done.")
```
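The only workaround we can think of so far is to serialize access with a lock (sketch below), but that processes requests one at a time, so any guidance on real concurrent access would be appreciated:
```python
import threading

llm_lock = threading.Lock()

def ask(question: str) -> str:
    # Only one thread talks to the single in-VRAM model at a time.
    with llm_lock:
        return llm(question)
```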
### Suggestion:
_No response_ | Issue: LLM Multiple access problem | https://api.github.com/repos/langchain-ai/langchain/issues/9747/comments | 4 | 2023-08-25T08:42:02Z | 2023-12-01T16:07:23Z | https://github.com/langchain-ai/langchain/issues/9747 | 1,866,620,005 | 9,747 |
[
"hwchase17",
"langchain"
]
| ### Feature request
1. Ideally, current `input_variables` should be separated into `required_variables` and `allowed_variables`
2. `allowed_variables` should consist of `required_variables` + `optional_variables`
3. Current implementation of `format_document` requires some overhaul, as suggested by #7239. Since `format_document` is part of the schema, it should either be a class or, at least, its formatting and validation parts should be separated.
```python
# from:
def format_document(doc: Document, prompt: BasePromptTemplate) -> str:
base_info = {"page_content": doc.page_content, **doc.metadata}
missing_metadata = set(prompt.input_variables).difference(base_info)
if len(missing_metadata) > 0:
required_metadata = [
iv for iv in prompt.input_variables if iv != "page_content"
]
raise ValueError(
f"Document prompt requires documents to have metadata variables: "
f"{required_metadata}. Received document with missing metadata: "
f"{list(missing_metadata)}."
)
document_info = {k: base_info[k] for k in prompt.input_variables}
return prompt.format(**document_info)
# into (assumes required_variables is input_variables - optional_variables, backward compatible, not ideal or elegant so far):
def _validate_document(doc: Document, prompt: BasePromptTemplate) -> None:
base_info = {"page_content": doc.page_content, **doc.metadata}
missing_metadata = set(prompt.required_variables).difference(base_info)
if missing_metadata:
raise ValueError(
f"Document prompt requires documents to have metadata variables: "
f"{prompt.required_variables}. Received document with missing metadata: "
f"{list(missing_metadata)}."
)
def _format_document(doc: Document, prompt: BasePromptTemplate) -> str:
    base_info = {"page_content": doc.page_content, **doc.metadata}
    document_info = {k: base_info[k] for k in prompt.input_variables}  # or allowed_variables
    return prompt.format(**document_info)

def format_document(doc: Document, prompt: BasePromptTemplate, validation_function: Callable = _validate_document, formatting_function: Callable = _format_document, **kwargs) -> str:
    validation_function(doc, prompt)
    return formatting_function(doc, prompt, **kwargs)  # format_kwargs?
```
### Motivation
Given that both `f-string` and `jinja2` support some control logic, it seems quite logical to allow optional variables, or to make `format_document` function more customizable.
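For illustration, this is the kind of template-level control logic I mean; jinja2 can already express it, and the point of the proposal is to let the prompt schema distinguish the optional `source` from the required `page_content`:
```python
from langchain.prompts import PromptTemplate

template = (
    "{{ page_content }}"
    "{% if source %}\nSource: {{ source }}{% endif %}"  # `source` would be optional
)
prompt = PromptTemplate(
    template=template,
    input_variables=["page_content", "source"],
    template_format="jinja2",
)
```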
### Your contribution
I'd like to work on it, but I believe there's a need for further discussion. | Add `optional_variables` for templates and make `format_document` customizable | https://api.github.com/repos/langchain-ai/langchain/issues/9746/comments | 0 | 2023-08-25T07:46:15Z | 2023-08-28T08:18:56Z | https://github.com/langchain-ai/langchain/issues/9746 | 1,866,525,826 | 9,746 |
[
"hwchase17",
"langchain"
]
| ### System Info
As stated in the title, the query is returning, but not the relevant documents. The code snippet below illustrates the issue:
```python
query = 'building bridges'
filter = Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='construction_material', value='steel')
limit = None
[]
```
I know that the documents and data are stored correctly because the query I am using works fine with similarity_search, and it returns the appropriate text. After splitting the PDF, I had to recreate the metadata and add it along with the documents. The meta_data field prints out without any problems when I access it in the similarity_search as well.
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**### Steps to Reproduce the Behavior:**
1. **Load PDF:** Begin by loading the PDF file that you will be working with.
2. **Append page_content:** Next, append the content from the PDF pages into an empty string called text.
3. **Split text Recursively:** Split the text string recursively to segment the content.
4. **Create Metadata for Split Text:** Use the following function to create metadata for the split text.
```python
def create_metadata(title: str, section: int, material: str) -> dict:
metadata = {
"title": title,
"section": section,
"material": material,
}
return metadata
```
5. **Loop Over Split Text:** Iterate through the split text, appending custom metadata to a list.
6. **Add Docs, Embeddings, Metadata:** Utilize the Chroma.from_texts method with the following parameters:
```python
vectordb = Chroma.from_texts(
texts=docs,
embedding=embedding,
persist_directory=persist_directory,
metadatas=metadatas_list,
)
```
**Proceed with SelfQueryRetriever:** Finally, proceed to use the SelfQueryRetriever.from_llm method as described in the documentation.
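(Roughly like this; the snippet is illustrative only, with attribute names matching the `create_metadata` function above:)
```python
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.base import SelfQueryRetriever

metadata_field_info = [
    AttributeInfo(name="title", description="Title of the document", type="string"),
    AttributeInfo(name="section", description="Section number", type="integer"),
    AttributeInfo(name="material", description="Construction material discussed", type="string"),
]
retriever = SelfQueryRetriever.from_llm(
    llm,
    vectordb,
    "Construction documents",
    metadata_field_info,
    verbose=True,
)
docs = retriever.get_relevant_documents("building bridges")
```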
---------------------------------------------------------------------------------------------------------------------
### Note:
Everything works as intended with the similarity_search. The SelfQueryRetriever is returning as expected minus the relevant documents.
My suspicion is that the issue may be related to the Documents() class, but I recreated the object/class without any success regarding output. It should still function properly if the data is being inserted correctly into the database and all the other queries are working fine. What has led me to this point is that an issue arises with PDFs when they are split; thus, the workarounds are either:
**Appending into an Empty String:** This is necessary because metadata becomes distorted, and page break behavior takes precedence over separators and chunk size.
**Converting PDF to Image and Then to Text:** The process is PDF -> IMG -> Tesseract -> Text, which still requires metadata to be recreated.
### Expected behavior
Expected behavior:
Output the query and the data related to it, not just the query. | unexpected behavior: retriever.get_relevant_documents | https://api.github.com/repos/langchain-ai/langchain/issues/9744/comments | 2 | 2023-08-25T06:54:26Z | 2023-12-01T16:07:28Z | https://github.com/langchain-ai/langchain/issues/9744 | 1,866,443,166 | 9,744 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I've been using Langchain to connect with the MongoDB vector store. While the file upload functionality works seamlessly, I encounter an error when trying to use the similarity search feature. Here's the error message I receive:

### Suggestion:
_No response_ | Issue: Error in Similarity Search with MongoDB Vector Store in Langchain | https://api.github.com/repos/langchain-ai/langchain/issues/9735/comments | 3 | 2023-08-25T03:52:38Z | 2024-02-10T16:18:57Z | https://github.com/langchain-ai/langchain/issues/9735 | 1,866,229,858 | 9,735 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi, I'm getting the following error with the LangChain integration with AWS SageMaker:
```
ValueError: Error raised by inference endpoint: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (424) from primary with message "{
  "code":424,
  "message":"prediction failure",
  "error":"Input payload must contain a 'inputs' key and optionally a 'parameters' key containing a dictionary of parameters."
}".
```
I've tried adding a custom attribute to accept any relevant terms in order to run my model, but I'm still having issues. See below for my initialization of the model:
```python
chain = load_qa_chain(
    llm=SagemakerEndpoint(
        endpoint_name="endpointname",
        credentials_profile_name="profilename",
        region_name="region",
        model_kwargs={"temperature": 1e-10},
        endpoint_kwargs={"CustomAttributes": "accept_eula=true"},
        content_handler=content_handler,
    ),
    prompt=PROMPT,
)
```
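Based on the wording of the error, the payload apparently needs an `inputs` key (and optionally `parameters`), so I assume the content handler has to produce something like the following. This is only a sketch, and the response key in `transform_output` is an assumption that depends on the deployed model:
```python
import json
from langchain.llms.sagemaker_endpoint import LLMContentHandler


class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        payload = {"inputs": prompt, "parameters": model_kwargs}
        return json.dumps(payload).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        response_json = json.loads(output.read().decode("utf-8"))
        # Response shape is an assumption; adjust to whatever the endpoint returns.
        return response_json[0]["generated_text"]
```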
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
chain = load_qa_chain(
    llm=SagemakerEndpoint(
        endpoint_name="endpointname",
        credentials_profile_name="profilename",
        region_name="region",
        model_kwargs={"temperature": 1e-10},
        endpoint_kwargs={"CustomAttributes": "accept_eula=true"},
        content_handler=content_handler,
    ),
    prompt=PROMPT,
)
```
### Expected behavior
error described above | AWS Sagemaker | https://api.github.com/repos/langchain-ai/langchain/issues/9733/comments | 9 | 2023-08-25T02:57:42Z | 2023-12-01T16:07:32Z | https://github.com/langchain-ai/langchain/issues/9733 | 1,866,186,574 | 9,733 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.271
Platform: Ubuntu 20.04
Device: Nvidia-T4
Python version: 3.9.17
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from typing import Any, Dict, List, Optional
from langchain.pydantic_v1 import Field, root_validator
from langchain.llms import VLLM
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
class MyVLLM(VLLM):
dtype: str = 'auto'
vllm_kwargs: Dict[str, Any] = Field(default_factory=dict)
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that python package exists in environment."""
try:
from vllm import LLM as VLLModel
except ImportError:
raise ImportError(
"Could not import vllm python package. "
"Please install it with `pip install vllm`."
)
values["client"] = VLLModel(
model=values["model"],
tensor_parallel_size=values["tensor_parallel_size"],
trust_remote_code=values["trust_remote_code"],
dtype=values["dtype"],
**values['vllm_kwargs']
)
return values
llm = MyVLLM(model="tiiuae/falcon-7b",
trust_remote_code=True, # mandatory for hf models
max_new_tokens=128,
top_k=10,
top_p=0.95,
temperature=0.8,
dtype='float16',
vllm_kwargs = {'gpu_memory_utilization': 0.98},
callbacks=[StreamingStdOutCallbackHandler()]
)
# Prompt
prompt = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"You are a nice chatbot having a conversation with a human."
),
# The `variable_name` here is what must align with memory
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template("{question}")
]
)
# Notice that we `return_messages=True` to fit into the MessagesPlaceholder
# Notice that `"chat_history"` aligns with the MessagesPlaceholder name
memory = ConversationBufferMemory(memory_key="chat_history",return_messages=True)
conversation = LLMChain(
llm=llm,
prompt=prompt,
verbose=True,
memory=memory
)
```
### Expected behavior
I am following the `Chatbots` example [here](https://python.langchain.com/docs/use_cases/chatbots).
It's not working as expected. The responses returned are odd: instead of just a single LLM reply, the output also contains fabricated human turns. What is happening there?
 | `Chatbots` use case example is not working | https://api.github.com/repos/langchain-ai/langchain/issues/9732/comments | 3 | 2023-08-25T00:48:21Z | 2023-12-02T16:05:42Z | https://github.com/langchain-ai/langchain/issues/9732 | 1,866,073,423 | 9,732 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain==0.0.271
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain import PromptTemplate
from langchain.agents import AgentType
from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI
from langchain.agents import initialize_agent
from api.utils.agent.tools import view_capabilities_tool

load_dotenv()

system_message = "You are required to use only the tools provided to answer a query.If no given tool can help," \
                 " truthfully tell the user that you are unable to help them.Always end reply with see ya!." \
                 "Query: {query}"

prompt_template = PromptTemplate(
    template=system_message,
    input_variables=["query"],
)

capabilities = view_capabilities_tool.CapabilitiesTool()

llm = ChatOpenAI(temperature=0)
agent_chain = initialize_agent(
    [capabilities],
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    agent_kwargs={
        "system_message": system_message
    }
)
response = agent_chain.run(input="What can you do")
print(response)
```

capabilities_tool:

```python
from typing import Type

from langchain.tools import BaseTool
from pydantic import BaseModel, BaseSettings


class CapabilitiesToolSchema(BaseModel):
    pass


class CapabilitiesTool(BaseTool, BaseSettings):
    name = "capabilities_tool"
    description = """Tool that shows what you are capable of doing."""
    args_schema: Type[CapabilitiesToolSchema] = CapabilitiesToolSchema

    def _run(
        self,
    ) -> str:
        body = ("I can help you out with"
                "\nAdding a site\nRemoving a site\nAdding an interest\nRemoving an interest\nViewing your details\n "
                "Opt out")
        return body
```
### Expected behavior
The model is meant to go through the whole reasoning process, select a tool, and wait for the response from that tool call.
Instead, the agent just stops at """Action_Input: ...""" every time. The model doesn't use any of the given tools; sometimes it gives these steps:
> Entering new AgentExecutor chain...
I can use the capabilities_tool to see what I am capable of doing. Let me check.
> Finished chain.
I can use the capabilities_tool to see what I am capable of doing. Let me check. | Langchain agent doesn't complete reasoning sequence stops halfway and can't use structured tools | https://api.github.com/repos/langchain-ai/langchain/issues/9728/comments | 3 | 2023-08-24T23:02:33Z | 2024-02-25T19:01:25Z | https://github.com/langchain-ai/langchain/issues/9728 | 1,866,002,041 | 9,728 |