issue_owner_repo (list, length 2) | issue_body (string, 0-261k chars, nullable) | issue_title (string, 1-925 chars) | issue_comments_url (string, 56-81 chars) | issue_comments_count (int64, 0-2.5k) | issue_created_at (string, 20 chars) | issue_updated_at (string, 20 chars) | issue_html_url (string, 37-62 chars) | issue_github_id (int64, 387k-2.46B) | issue_number (int64, 1-127k)
---|---|---|---|---|---|---|---|---|---|
[
"hwchase17",
"langchain"
]
| ### System Info
langchain = 0.0.288
python = 3.8.0
### Who can help?
@hwchase17
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm sorry, I'm not very familiar with this field, but the description of this function doesn't seem to match what it actually does.
```
def similarity_search_with_score(
self, query: str, k: int = 4, **kwargs: Any
) -> List[Tuple[Document, float]]:
"""
Return list of documents most similar to the query
text and cosine distance in float for each.
Lower score represents more similarity.
"""
if self._embedding is None:
raise ValueError(
"_embedding cannot be None for similarity_search_with_score"
)
content: Dict[str, Any] = {"concepts": [query]}
if kwargs.get("search_distance"):
content["certainty"] = kwargs.get("search_distance")
query_obj = self._client.query.get(self._index_name, self._query_attrs)
if kwargs.get("where_filter"):
query_obj = query_obj.with_where(kwargs.get("where_filter"))
embedded_query = self._embedding.embed_query(query)
if not self._by_text:
vector = {"vector": embedded_query}
result = (
query_obj.with_near_vector(vector)
.with_limit(k)
.with_additional("vector")
.do()
)
else:
result = (
query_obj.with_near_text(content)
.with_limit(k)
.with_additional("vector")
.do()
)
if "errors" in result:
raise ValueError(f"Error during query: {result['errors']}")
docs_and_scores = []
for res in result["data"]["Get"][self._index_name]:
text = res.pop(self._text_key)
score = np.dot(res["_additional"]["vector"], embedded_query)
docs_and_scores.append((Document(page_content=text, metadata=res), score))
return docs_and_scores
```
### Expected behavior
`score = np.dot(res["_additional"]["vector"], embedded_query)`
As you can see, the description mentions that the `score` corresponds to `cosine distance`, but the `code` seems to only calculate the `dot product`. Am I missing something?
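To make the concern concrete, here is a tiny NumPy check (my own illustration, not LangChain code) showing that a raw dot product only matches cosine similarity when both vectors have unit length, and that cosine distance would be one minus that value:
```python
import numpy as np

doc_vec = np.array([1.0, 2.0, 2.0])    # stored document vector, norm = 3
query_vec = np.array([2.0, 0.0, 0.0])  # embedded query, norm = 2

dot = np.dot(doc_vec, query_vec)                                        # 2.0 -- what the snippet above returns
cos_sim = dot / (np.linalg.norm(doc_vec) * np.linalg.norm(query_vec))  # 0.333...
cos_dist = 1.0 - cos_sim                                                # 0.666...

# dot equals cos_sim only when both vectors are unit-normalized
print(dot, cos_sim, cos_dist)
```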
And here is the relevant definition from Weaviate:
https://weaviate.io/blog/distance-metrics-in-vector-search#cosine-similarity

Thanks for your kind help!
| Is score return from similarity_search_with_score in Weaviate is really cosine distance? | https://api.github.com/repos/langchain-ai/langchain/issues/10581/comments | 4 | 2023-09-14T14:14:17Z | 2023-12-25T16:08:05Z | https://github.com/langchain-ai/langchain/issues/10581 | 1,896,670,748 | 10,581 |
[
"hwchase17",
"langchain"
]
| ### System Info
'OS_NAME': 'DEBIAN_10'
Langchain version : '0.0.288'
python : 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This issue only occurs when region != "global". The retriever works well when region is set to "global"
Steps to reproduce :
1. Create an Enterprise Search app with region = 'us'

3. Import langchain version 0.0.288
```
import langchain
from langchain.retrievers import GoogleCloudEnterpriseSearchRetriever
PROJECT_ID = "<PROJECT_ID>" # Set to your Project ID
SEARCH_ENGINE_ID = "<SEARCH_ENGINE_ID>"  # Set to your data store ID
retriever = GoogleCloudEnterpriseSearchRetriever(
project_id=PROJECT_ID,
search_engine_id=SEARCH_ENGINE_ID,
max_documents=3 ,
location_id = "us"
)
retriever.get_relevant_documents("What is capital of India?")
```
4. Code errors out with below
```
---------------------------------------------------------------------------
_InactiveRpcError Traceback (most recent call last)
File /opt/conda/envs/python310/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:72, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
71 try:
---> 72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
File /opt/conda/envs/python310/lib/python3.10/site-packages/grpc/_channel.py:1161, in _UnaryUnaryMultiCallable.__call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
1155 (
1156 state,
1157 call,
1158 ) = self._blocking(
1159 request, timeout, metadata, credentials, wait_for_ready, compression
1160 )
-> 1161 return _end_unary_response_blocking(state, call, False, None)
File /opt/conda/envs/python310/lib/python3.10/site-packages/grpc/_channel.py:1004, in _end_unary_response_blocking(state, call, with_call, deadline)
1003 else:
-> 1004 raise _InactiveRpcError(state)
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.NOT_FOUND
details = "DataStore projects/PROJECT_ID/locations/us/collections/default_collection/dataStores/SEARCH_ENGINE_ID not found."
debug_error_string = "UNKNOWN:Error received from peer ipv4:172.253.120.95:443 {created_time:"2023-09-14T12:55:00.327037809+00:00", grpc_status:5, grpc_message:"DataStore projects/PROJECT_ID/locations/us/collections/default_collection/dataStores/SEARCH_ENGINE_ID not found."}"
>
The above exception was the direct cause of the following exception:
NotFound Traceback (most recent call last)
Cell In[365], line 1
----> 1 retriever.get_relevant_documents("What is capital of India?")
File ~/.local/lib/python3.10/site-packages/langchain/schema/retriever.py:208, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
206 except Exception as e:
207 run_manager.on_retriever_error(e)
--> 208 raise e
209 else:
210 run_manager.on_retriever_end(
211 result,
212 **kwargs,
213 )
File ~/.local/lib/python3.10/site-packages/langchain/schema/retriever.py:201, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
199 _kwargs = kwargs if self._expects_other_args else {}
200 if self._new_arg_supported:
--> 201 result = self._get_relevant_documents(
202 query, run_manager=run_manager, **_kwargs
203 )
204 else:
205 result = self._get_relevant_documents(query, **_kwargs)
File ~/.local/lib/python3.10/site-packages/langchain/retrievers/google_cloud_enterprise_search.py:254, in GoogleCloudEnterpriseSearchRetriever._get_relevant_documents(self, query, run_manager)
251 search_request = self._create_search_request(query)
253 try:
--> 254 response = self._client.search(search_request)
255 except InvalidArgument as e:
256 raise type(e)(
257 e.message + " This might be due to engine_data_type not set correctly."
258 )
File /opt/conda/envs/python310/lib/python3.10/site-packages/google/cloud/discoveryengine_v1beta/services/search_service/client.py:577, in SearchServiceClient.search(self, request, retry, timeout, metadata)
570 metadata = tuple(metadata) + (
571 gapic_v1.routing_header.to_grpc_metadata(
572 (("serving_config", request.serving_config),)
573 ),
574 )
576 # Send the request.
--> 577 response = rpc(
578 request,
579 retry=retry,
580 timeout=timeout,
581 metadata=metadata,
582 )
584 # This method is paged; wrap the response in a pager, which provides
585 # an `__iter__` convenience method.
586 response = pagers.SearchPager(
587 method=rpc,
588 request=request,
589 response=response,
590 metadata=metadata,
591 )
File /opt/conda/envs/python310/lib/python3.10/site-packages/google/api_core/gapic_v1/method.py:113, in _GapicCallable.__call__(self, timeout, retry, *args, **kwargs)
110 metadata.extend(self._metadata)
111 kwargs["metadata"] = metadata
--> 113 return wrapped_func(*args, **kwargs)
File /opt/conda/envs/python310/lib/python3.10/site-packages/google/api_core/grpc_helpers.py:74, in _wrap_unary_errors.<locals>.error_remapped_callable(*args, **kwargs)
72 return callable_(*args, **kwargs)
73 except grpc.RpcError as exc:
---> 74 raise exceptions.from_grpc_error(exc) from exc
NotFound: 404 DataStore projects/PROJECT_ID/locations/us/collections/default_collection/dataStores/SEARCH_ENGINE_ID not found.
```
### Expected behavior
Code should return three relevant documents from Enterprise Search | GoogleCloudEnterpriseSearchRetriever fails to where location is "us" | https://api.github.com/repos/langchain-ai/langchain/issues/10580/comments | 3 | 2023-09-14T13:11:37Z | 2023-11-01T05:59:16Z | https://github.com/langchain-ai/langchain/issues/10580 | 1,896,548,466 | 10,580 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version 0.0.285
langsmith version 0.0.28
Python version 3.11.2
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I followed the tutorial in the RAG cookbook, "With Memory and returning source documents".
Steps that change the behaviour:
1. Use `FAISS.from_documents` and save to disk
```python
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(documents, embeddings)
vectorstore.save_local("data/faiss_index")
```
2. Load from file
```python
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.load_local("data/faiss_index", embeddings)
retriever = vectorstore.as_retriever()
```
3. Error raised "'Document' object has no attribute '_lc_kwargs'" in step 5
```python
final_inputs = {
    "context": lambda x: _combine_documents(x["docs"]),
    "question": itemgetter("question")
}
```
here is the screen shot when using langsmith
<img width="543" alt="RAG" src="https://github.com/langchain-ai/langchain/assets/105797032/038b1b98-4dde-47a1-a79b-4061014c05a2">
### Expected behavior
Expected behaviour is LLM give result without error | Document' object has no attribute '_lc_kwargs | https://api.github.com/repos/langchain-ai/langchain/issues/10579/comments | 5 | 2023-09-14T13:04:51Z | 2024-01-30T00:55:18Z | https://github.com/langchain-ai/langchain/issues/10579 | 1,896,536,091 | 10,579 |
[
"hwchase17",
"langchain"
]
| ### System Info
I have a question&answer over docs chatbot application, that uses the RetrievalQAWithSourcesChain and ChatPromptTemplate. In langchain version 0.0.238 it used to return sources but this seems to be broken in the releases since then.
Python version: Python 3.11.4
LangChain version: 0.0.287
Example response with missing sources:
> Entering new RetrievalQAWithSourcesChain chain...
> Finished chain.
{'question': 'what is sql injection', 'answer': 'SQL injection is a web security vulnerability that allows an attacker to interfere with the queries that an application makes to its database. By manipulating the input data, an attacker can execute their own malicious SQL queries, which can lead to unauthorized access, data theft, or modification of the database. This vulnerability can be exploited to view sensitive data, modify or delete data, or even take control of the database server. SQL injection is a serious issue that can result in high-profile data breaches and compromises of user accounts. It is important for developers to implement proper input validation and parameterized queries to prevent SQL injection attacks.\n\n', 'sources': ''}
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import pickle
import gradio as gr
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
)
pkl_file_path = "faiss_store.pkl"
event = {"question": "what is sql injection"}
system_template = """
Use the provided articles delimited by triple quotes to answer questions. If the answer cannot be found in the articles, write "I could not find an answer."
If you don't know the answer, just say "Hmm..., I'm not sure.", don't try to make up an answer.
ALWAYS return a "SOURCES" part in your answer.
The "SOURCES" part should be a reference to the source of the document from which you got your answer.
Example of your response should be:
The answer is foo
SOURCES:
1. abc
2. xyz
Begin!
----------------
{summaries}
"""
def get_chain(store: FAISS, prompt_template: ChatPromptTemplate):
return RetrievalQAWithSourcesChain.from_chain_type(
PromptLayerChatOpenAI(
pl_tags=["burpbot"],
temperature=0,
),
chain_type="stuff",
retriever=store.as_retriever(),
chain_type_kwargs={"prompt": prompt_template},
reduce_k_below_max_tokens=True,
verbose=True,
)
def create_prompt_template() -> ChatPromptTemplate:
return ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate.from_template(system_template),
HumanMessagePromptTemplate.from_template("{question}"),
]
)
def load_remote_faiss_store() -> FAISS:
with open(pkl_file_path, "rb") as f:
return pickle.load(f)
def main() -> dict:
prompt_template = create_prompt_template()
store: FAISS = load_remote_faiss_store()
chain = get_chain(store, prompt_template)
result = chain(event)
print(result)
```
### Expected behavior
expected output:
>{'question': 'what is sql injection', 'answer': 'SQL injection is a web security vulnerability that allows an attacker to interfere with the queries that an application makes to its database. By manipulating the input data, an attacker can execute their own malicious SQL queries, which can lead to unauthorized access, data theft, or modification of the database. This vulnerability can be exploited to view sensitive data, modify or delete data, or even take control of the database server. SQL injection is a serious issue that can result in high-profile data breaches and compromises of user accounts. It is important for developers to implement proper input validation and parameterized queries to prevent SQL injection attacks.\n\n', 'sources': 'https://example.net/web-security/sql-injection'}
| The RetrievalQAWithSourcesChain doesn't return SOURCES. | https://api.github.com/repos/langchain-ai/langchain/issues/10575/comments | 5 | 2023-09-14T10:01:45Z | 2024-02-17T16:07:23Z | https://github.com/langchain-ai/langchain/issues/10575 | 1,896,207,622 | 10,575 |
[
"hwchase17",
"langchain"
]
| ### System Info
```python
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
    if f"{self.ai_prefix}:" in text:
        return AgentFinish(
            {"output": text.split(f"{self.ai_prefix}:")[-1].strip()}, text
        )
    regex = r"Action: (.*?)[\n]*Action Input: (.*)"
    match = re.search(regex, text)
    if not match:
        raise OutputParserException(f"Could not parse LLM output: `{text}`")
    action = match.group(1)
    action_input = match.group(2)
    return AgentAction(action.strip(), action_input.strip(" ").strip('"'), text)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Initialize an agent.
2. Ask the agent a simple question that it can solve without using any tools (a possible workaround sketch follows below).
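For reference, a minimal sketch of a workaround (assuming `tools` and `llm` are whatever the agent already uses; not verified against this exact version): letting the executor handle the parse failure instead of raising.
```python
from langchain.agents import AgentType, initialize_agent

# `llm` and `tools` are placeholders for the existing agent setup.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    handle_parsing_errors=True,  # feed the unparsable output back to the model instead of raising
    verbose=True,
)
```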
### Expected behavior
DONT RAISE ERROR | agent got "No I need to use a tool? No" response from llmm,which CANNOT be parsed | https://api.github.com/repos/langchain-ai/langchain/issues/10572/comments | 2 | 2023-09-14T05:33:02Z | 2023-12-15T05:47:20Z | https://github.com/langchain-ai/langchain/issues/10572 | 1,895,728,355 | 10,572 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
https://huggingface.co/spaces/JavaFXpert/Chat-GPT-LangChain
<img width="399" alt="WX20230914-113935@2x" src="https://github.com/langchain-ai/langchain/assets/34183928/1d61724a-152f-4ad2-8197-0dfc0fd44f98">
### Idea or request for content:
_No response_ | Link in the Readme is invalid. | https://api.github.com/repos/langchain-ai/langchain/issues/10569/comments | 3 | 2023-09-14T03:40:51Z | 2023-12-27T16:05:23Z | https://github.com/langchain-ai/langchain/issues/10569 | 1,895,600,673 | 10,569 |
[
"hwchase17",
"langchain"
]
| ### System Info
Unresolved reference 'QianfanLLMEndpoint'
Name: langchain
Version: 0.0.288
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.llms.baidu_qianfan_endpoint import QianfanLLMEndpoint
### Expected behavior
I hope to use Qianfan model but I can't import it, even though i have update my langchain, it can't work. | Can not use QianfanLLMEndpoint | https://api.github.com/repos/langchain-ai/langchain/issues/10567/comments | 6 | 2023-09-14T02:32:37Z | 2023-12-26T16:05:47Z | https://github.com/langchain-ai/langchain/issues/10567 | 1,895,548,600 | 10,567 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Attempting to make a Google Calendar agent; however, it keeps passing a field that should be a datetime object as a string.
The prompt:
```
prefix = """Date format: datetime(2023, 5, 2, 10, 0, 0)
Based on this event description: "Joey birthday tomorrow at 7 pm",
output a json of the following parameters:
Today's datetime on UTC time datetime(2023, 5, 2, 10, 0, 0), it's Tuesday and timezone
of the user is -5, take into account the timezone of the user and today's date.
1. summary
2. start
3. end
4. location
5. description
6. user_timezone
event_summary:
{{
"summary": "Joey birthday",
"start": "datetime(2023, 5, 3, 19, 0, 0)",
"end": "datetime(2023, 5, 3, 20, 0, 0)",
"location": "",
"description": "",
"user_timezone": "America/New_York"
}}
Date format: datetime(YYYY, MM, DD, hh, mm, ss)
Based on this event description: "Create a meeting for 5 pm on Saturday with Joey",
output a json of the following parameters:
Today's datetime on UTC time datetime(2023, 5, 4, 10, 0, 0), it's Thursday and timezone
of the user is -5, take into account the timezone of the user and today's date.
1. summary
2. start
3. end
4. location
5. description
6. user_timezone
event_summary:
{{
"summary": "Meeting with Joey",
"start": "datetime(2023, 5, 6, 17, 0, 0)",
"end": "datetime(2023, 5, 6, 18, 0, 0)",
"location": "",
"description": "",
"user_timezone": "America/New_York"
}}
"""```
The tool:
```
# Imports inferred from usage (GoogleCalendar/Event assumed to come from the gcsa package)
from datetime import datetime
from os import environ
from typing import Optional, Union

from gcsa.event import Event
from gcsa.google_calendar import GoogleCalendar
from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)
from langchain.tools import BaseTool


class CalnederEventTool(BaseTool):
    """A tool used to create events on google calendar."""
    name = "custom_search"
    description = "a tool used to create events on google calendar"

    def _run(
        self,
        summary: str,
        start: datetime,
        end: Union[datetime, None],
        recurrence: Optional[str] = None,  # Changed from Optional[Recurrence] to Optional[str]
        run_manager: Optional[CallbackManagerForToolRun] = None,
    ) -> str:
        GOOGLE_EMAIL = environ.get('GOOGLE_EMAIL')
        CREDENTIALS_PATH = environ.get('CREDENTIALS_PATH')
        calendar = GoogleCalendar(
            GOOGLE_EMAIL,
            credentials_path=CREDENTIALS_PATH
        )
        event = Event(summary=summary, start=start, end=end)
        calendar.add_event(event)
        return f"Created event: {summary}"  # added so the tool returns a string, as annotated

    async def _arun(
        self, query: str, run_manager: Optional[AsyncCallbackManagerForToolRun] = None
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("custom_search does not support async")
```
```
> Entering new AgentExecutor chain...
Action:
{
  "action": "custom_search",
  "action_input": {
    "summary": "Going to the bar",
    "start": "2020-09-01T17:00:00",
    "end": "2020-09-01T18:00:00",
    "recurrence": ""
  }
}
```
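A possible workaround (my assumption, not something verified against this exact setup): coerce the string values back into `datetime` inside `_run`, since the LLM will usually emit ISO-8601 strings rather than Python objects. A minimal sketch:
```python
from datetime import datetime
from typing import Optional, Union

def _coerce_dt(value: Union[str, datetime, None]) -> Optional[datetime]:
    """Accept either a datetime or an ISO-8601 string like '2020-09-01T17:00:00'."""
    if value is None or isinstance(value, datetime):
        return value
    return datetime.fromisoformat(value)

# Inside CalnederEventTool._run, before building the Event:
# start = _coerce_dt(start)
# end = _coerce_dt(end)
```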
### Suggestion:
_No response_ | Issue: Agent keeps using the wrong type | https://api.github.com/repos/langchain-ai/langchain/issues/10566/comments | 10 | 2023-09-14T02:22:18Z | 2023-09-27T20:09:53Z | https://github.com/langchain-ai/langchain/issues/10566 | 1,895,540,285 | 10,566 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
`chain = MultiRetrievalQAChain.from_retrievers(OpenAI(), retriever_infos, verbose=True)`
I'm receiving the error below when I try to call the above (I'm following this doc: https://python.langchain.com/docs/use_cases/question_answering/how_to/multi_retrieval_qa_router):
```
ValidationError Traceback (most recent call last)
Cell In[7], line 1
----> 1 chain = MultiRetrievalQAChain.from_retrievers(OpenAI(), retriever_infos)
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/router/multi_retrieval_qa.py:66, in MultiRetrievalQAChain.from_retrievers(cls, llm, retriever_infos, default_retriever, default_prompt, default_chain, **kwargs)
64 prompt = r_info.get("prompt")
65 retriever = r_info["retriever"]
---> 66 chain = RetrievalQA.from_llm(llm, prompt=prompt, retriever=retriever)
67 name = r_info["name"]
68 destination_chains[name] = chain
File ~/anaconda3/lib/python3.11/site-packages/langchain/chains/retrieval_qa/base.py:84, in BaseRetrievalQA.from_llm(cls, llm, prompt, callbacks, **kwargs)
74 document_prompt = PromptTemplate(
75 input_variables=["page_content"], template="Context:\n{page_content}"
76 )
77 combine_documents_chain = StuffDocumentsChain(
78 llm_chain=llm_chain,
79 document_variable_name="context",
80 document_prompt=document_prompt,
81 callbacks=callbacks,
82 )
---> 84 return cls(
85 combine_documents_chain=combine_documents_chain,
86 callbacks=callbacks,
87 **kwargs,
88 )
File ~/anaconda3/lib/python3.11/site-packages/langchain/load/serializable.py:75, in Serializable.__init__(self, **kwargs)
74 def __init__(self, **kwargs: Any) -> None:
---> 75 super().__init__(**kwargs)
76 self._lc_kwargs = kwargs
File ~/anaconda3/lib/python3.11/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for RetrievalQA
retriever
Can't instantiate abstract class BaseRetriever with abstract method _get_relevant_documents (type=type_error)
```
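For what it's worth (my reading of the validation message, not a confirmed diagnosis): the error suggests one of the `retriever` values in `retriever_infos` is the abstract `BaseRetriever` class rather than a concrete retriever instance. A sketch of the shape the doc expects, with hypothetical store names:
```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Hypothetical stores; any concrete vector store works.
personal_store = FAISS.from_texts(["I like apples"], OpenAIEmbeddings())
work_store = FAISS.from_texts(["Quarterly report draft"], OpenAIEmbeddings())

retriever_infos = [
    {
        "name": "personal notes",
        "description": "Good for answering questions about personal notes",
        "retriever": personal_store.as_retriever(),  # a concrete retriever instance
    },
    {
        "name": "work documents",
        "description": "Good for answering questions about work documents",
        "retriever": work_store.as_retriever(),
    },
]
```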
### Suggestion:
_No response_ | Issue: Dynamically select from multiple retrievers | https://api.github.com/repos/langchain-ai/langchain/issues/10561/comments | 1 | 2023-09-13T21:34:13Z | 2023-09-13T22:29:56Z | https://github.com/langchain-ai/langchain/issues/10561 | 1,895,297,545 | 10,561 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Update RetrievalQA chain so that custom prompts can accept parameters other than input_documents and question.
Current functionality is limited by the call to `StuffDocumentsChain`:
```python
answer = self.combine_documents_chain.run(
    input_documents=docs, question=question, callbacks=_run_manager.get_child()
)
```
Any additional parameters required aren't passed, including chat history.
Two-line code update required:
```python
inputs['input_documents'] = docs
answer = self.combine_documents_chain.run(
    inputs, callbacks=_run_manager.get_child()
)
```
### Motivation
Improve the flexibility of the RetrievalQA chain: enable the system message to be customised, allow chat history to be passed so GPT can refer back to previous answers, customise the language of the answer in the QA chain, etc.
### Your contribution
Will submit PR with above change in-line with contributing guidelines. | RetrievalQA custom prompt to accept prompts other than context and question e.g. language for use in Sequential Chain | https://api.github.com/repos/langchain-ai/langchain/issues/10557/comments | 3 | 2023-09-13T18:59:17Z | 2024-02-11T16:14:37Z | https://github.com/langchain-ai/langchain/issues/10557 | 1,895,083,788 | 10,557 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain 0.0.281
Platform: Centos
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi,
I have two vector stores:
```python
splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=50)
splits_1 = splitter.split_documents(docs_1)
splits_2 = splitter.split_documents(docs_2)
store1 = Chroma.from_documents(documents=splits_1, embedding=HuggingFaceEmbeddings())
store2 = Chroma.from_documents(documents=splits_2, embedding=HuggingFaceEmbeddings())
```
Then I use store2 to do a similarity search, and it returns results from splits_1, which is very weird. Can someone please help?
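One thing that may be worth checking (an assumption on my part, since I can't see the rest of the setup): `Chroma.from_documents` uses a default collection name ("langchain") when none is given, so two stores created this way in the same process or persist directory can end up reading from the same underlying collection. A sketch that keeps them separate:
```python
store1 = Chroma.from_documents(
    documents=splits_1,
    embedding=HuggingFaceEmbeddings(),
    collection_name="docs_1",  # explicit, distinct collection
)
store2 = Chroma.from_documents(
    documents=splits_2,
    embedding=HuggingFaceEmbeddings(),
    collection_name="docs_2",
)
```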
Thanks
Tom
### Expected behavior
Different vector store should use its own pool to do the similarity search | LangChain's Chroma similarity_search return results from other db | https://api.github.com/repos/langchain-ai/langchain/issues/10555/comments | 7 | 2023-09-13T17:42:19Z | 2024-05-17T16:06:33Z | https://github.com/langchain-ai/langchain/issues/10555 | 1,894,977,351 | 10,555 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```python
import pandas as pd
import pandas_gpt

df = pd.read_csv('aisc-shapes-database-v16.0.csv', index_col=0, header=0, usecols=["A:F"], names=[
    "Type", "EDI_Std_Nomenclature", "AISC_Manual_Label", "T_F", "W", "Area"])
df.ask('what is the area of W12X12?')
```
Need help getting this file to read with pandas.read_csv
[aisc-shapes-database-v16.0.csv](https://github.com/langchain-ai/langchain/files/12600284/aisc-shapes-database-v16.0.csv)
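For what it's worth, the `UnicodeDecodeError` below (byte 0x96) usually means the CSV is not UTF-8 but a Windows code page: 0x96 is an en dash in cp1252. Passing an explicit encoding may help; a sketch, with the remaining arguments assumed unchanged:
```python
import pandas as pd

# cp1252 (and latin-1) accept byte 0x96; cp1252 maps it to an en dash.
df = pd.read_csv(
    "aisc-shapes-database-v16.0.csv",
    encoding="cp1252",
    index_col=0,
    header=0,
)
```
Separately, `usecols=["A:F"]` looks like an Excel-style range; for `read_csv`, `usecols` expects actual column names or integer positions.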
### Suggestion:
This is my error message
File "c:\Users\camer\import pandas as pd.py", line 4, in <module>
df = pd.read_csv('aisc-shapes-database-v16.0.csv', index_col=0, header=0, usecols = ["A:F"], names = [
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\camer\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\readers.py", line 948, in read_csv
return _read(filepath_or_buffer, kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\camer\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\readers.py", line 611, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\camer\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\readers.py", line 1448, in __init__
self._engine = self._make_engine(f, self.engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\camer\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\readers.py", line 1723, in _make_engine
return mapping[engine](f, **self.options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\camer\AppData\Local\Programs\Python\Python311\Lib\site-packages\pandas\io\parsers\c_parser_wrapper.py", line 93, in __init__
self._reader = parsers.TextReader(src, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "parsers.pyx", line 579, in pandas._libs.parsers.TextReader.__cinit__
File "parsers.pyx", line 668, in pandas._libs.parsers.TextReader._get_header
File "parsers.pyx", line 879, in pandas._libs.parsers.TextReader._tokenize_rows
File "parsers.pyx", line 890, in pandas._libs.parsers.TextReader._check_tokenize_status
File "parsers.pyx", line 2050, in pandas._libs.parsers.raise_parser_error
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x96 in position 703: invalid start byte | Issue: Parsing issue | https://api.github.com/repos/langchain-ai/langchain/issues/10554/comments | 2 | 2023-09-13T17:23:30Z | 2023-12-20T16:05:06Z | https://github.com/langchain-ai/langchain/issues/10554 | 1,894,953,189 | 10,554 |
[
"hwchase17",
"langchain"
]
| I've been searching for a large-context LLM with a relatively low parameter count suitable for local execution on multiple T4 GPUs or a single A100. My primary goal is to summarize extensive financial reports. While I came across FinGPT v1, it seems it isn't hosted on HuggingFace.
However, I did find chatglm-6b, which serves as the foundation for FinGPT v1. This model is accessible on HuggingFace, but I'm facing issues loading it.
Here's a snippet that successfully loads and uses the model outside Langchain:
```
from transformers import AutoModel, AutoTokenizer
model_name = "THUDM/chatglm2-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# model = AutoModel.from_pretrained(model_name, trust_remote_code=True).cuda()
# 按需修改,目前只支持 4/8 bit 量化
# model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).quantize(4).cuda()
import torch
has_cuda = torch.cuda.is_available()
# has_cuda = False # force cpu
if has_cuda:
#model = AutoModel.from_pretrained("THUDM/chatglm2-6b-int4",trust_remote_code=True).half().cuda() # 3.92
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).quantize(4).cuda()
else:
model = AutoModel.from_pretrained("THUDM/chatglm2-6b-int4",trust_remote_code=True).half() # float()
response, history = model.chat(tokenizer, f"Summarize this in a few words: {a}", history=[])
```
But, when I try the following, to use in Langchain:
```
from langchain.llms import HuggingFacePipeline
llm = HuggingFacePipeline.from_model_id(
model_id="THUDM/chatglm-6b",
task="text-generation",
model_kwargs={"temperature": 0, "max_length": 64},
)
```
I encounter this error:
```
ValueError: Tokenizer class ChatGLMTokenizer does not exist or is not currently imported.
```
From what I gather, the ChatGLM model cannot be passed directly to HuggingFace's pipeline. While the Langchain documentation does mention using ChatGLM as a local model, it seems to primarily focus on using it via an API endpoint:
```
endpoint_url = "http://127.0.0.1:8000"
# direct access endpoint in a proxied environment
# os.environ['NO_PROXY'] = '127.0.0.1'
llm = ChatGLM(
endpoint_url=endpoint_url,
max_token=80000,
history=[["我将从美国到中国来旅游,出行前希望了解中国的城市", "欢迎问我任何问题。"]],
top_p=0.9,
model_kwargs={"sample_model_args": False},
)
```
Would anyone have insights on how to correctly load ChatGLM for tasks within Langchain?
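One pattern that may work (a sketch under the assumption that `model` and `tokenizer` are the objects loaded above with `trust_remote_code=True`; `ChatGLMLocal` is a hypothetical name, not a built-in LangChain class) is a thin custom LLM wrapper that calls `model.chat` directly instead of going through the text-generation pipeline:
```python
from typing import Any, List, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class ChatGLMLocal(LLM):
    """Hypothetical wrapper around a locally loaded ChatGLM model."""

    model: Any
    tokenizer: Any

    @property
    def _llm_type(self) -> str:
        return "chatglm-local"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # model/tokenizer are the objects loaded earlier with trust_remote_code=True
        response, _history = self.model.chat(self.tokenizer, prompt, history=[])
        return response


llm = ChatGLMLocal(model=model, tokenizer=tokenizer)
```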
### Suggestion:
_No response_ | Issue: Can I load THUDM/chatglm-6b? | https://api.github.com/repos/langchain-ai/langchain/issues/10553/comments | 5 | 2023-09-13T16:32:12Z | 2024-02-17T16:07:28Z | https://github.com/langchain-ai/langchain/issues/10553 | 1,894,880,317 | 10,553 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The `index` [API Reference document](https://api.python.langchain.com/en/latest/indexes/langchain.indexes._api.index.html) that is linked in the [Indexing documentation](https://python.langchain.com/docs/modules/data_connection/indexing#quickstart) returns a 404 error.
### Idea or request for content:
Please include detailed documentation for `index` to use correctly `SQLRecordManager`. | DOC: inexistent documentation for index | https://api.github.com/repos/langchain-ai/langchain/issues/10552/comments | 2 | 2023-09-13T16:04:16Z | 2023-12-20T16:05:11Z | https://github.com/langchain-ai/langchain/issues/10552 | 1,894,837,821 | 10,552 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain_version: "0.0.287"
library: "langchain"
library_version: "0.0.287"
platform: "Linux-6.1.0-12-amd64-x86_64-with-glibc2.36"
py_implementation: "CPython"
runtime: "python"
runtime_version: "3.11.2"
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
After following the [indexing instructions](https://python.langchain.com/docs/modules/data_connection/indexing), `index` stores the documents in a Redis vectorstore, but it does so outside the vectorstore's index.
```
import os, time, json, openai
from langchain.vectorstores.redis import Redis
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.indexes import SQLRecordManager, index
from langchain.schema import Document
from datetime import datetime
from pathlib import Path
openai.api_key = os.environ['OPENAI_API_KEY']
VECTORS_INDEX_NAME = 'Vectors'
COLLECTION_NAME = 'DocsDB'
NAMESPACE = f"Redis/{COLLECTION_NAME}"
REDIS_URL = "redis://10.0.1.21:6379"
embeddings = OpenAIEmbeddings()
record_manager = SQLRecordManager(NAMESPACE, db_url="sqlite:///cache_Redis.sql")
record_manager.create_schema()
rds_vectorstore = Redis.from_existing_index(
embeddings,
index_name=VECTORS_INDEX_NAME,
redis_url=REDIS_URL,
schema='Redis_schema.yaml'
)
index(
document,
record_manager,
rds_vectorstore,
cleanup = "full", # None: for first document load; "incremental": for following documents
source_id_key = "title",
)
```
When exploring the Redis vectorstore, all `documents` were loaded outside the specified `VECTORS_INDEX_NAME`.
When `documents` are loaded to the vectorstore without `RecordManager` `index`, they are created inside the specified `VECTORS_INDEX_NAME` when using the following code:
```
rds = Redis.from_documents(
document,
embeddings,
index_name=VECTORS_INDEX_NAME,
redis_url=REDIS_URL,
index_schema='Redis_schema.yaml'
)
```
### Expected behavior
`Documents` loaded into a Redis vectorstore using `index` `RecordManager` should be created inside the vectorstore's index. | SQLRecordManager index adds documents outside existing Redis vectorstore index | https://api.github.com/repos/langchain-ai/langchain/issues/10551/comments | 3 | 2023-09-13T15:58:42Z | 2024-01-30T00:46:01Z | https://github.com/langchain-ai/langchain/issues/10551 | 1,894,829,054 | 10,551 |
[
"hwchase17",
"langchain"
]
| ### Feature request
MMR search_type is not implemented for Google Vertex AI Matching Engine Vector Store (new name of Matching Engine- Vector Search).
I am getting the error `NotImplementedError`
Below is the code that I had used
`retriever = me.as_retriever(
search_type="mmr",
search_kwargs={
"k": 10,
"search_distance": 0.6,
"fetch_k": 15,
"lambda_mult": 0.7 }}`
Please implement it at the earliest; I would also request the team to provide an ETA.
### Motivation
I am working for a client where they are using only Google Vertex AI components for creating LLM chatbot agents using various unstructured document types. We are not getting optimal results with the default `search_type="similarity"` , we understand that results can improve a lot with MMR search. Hence kindly requesting the team to add the `search_type="mmr"` feature
### Your contribution
Can provide feedback on the new feature performance | MMR search_type not implemented for Google Vertex AI Matching Engine Vector Store (new name of Matching Engine- Vector Search) | https://api.github.com/repos/langchain-ai/langchain/issues/10550/comments | 1 | 2023-09-13T15:58:41Z | 2024-03-16T16:04:41Z | https://github.com/langchain-ai/langchain/issues/10550 | 1,894,829,007 | 10,550 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
```python
# The code for my model for sentiment analysis (this works, the problem is in the next part of my code)
from datasets import load_dataset,Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer, sample_dataset
from transformers import pipeline
import pandas as pd
import langchain
# df = pd.read_csv("C:/Users/sanja/OneDrive/Desktop/Trillo InternShip/train.csv",encoding='ISO-8859-1')
df = pd.read_csv("C:/Users/sanja/OneDrive/Desktop/Trillo InternShip/train-utf-8.csv")
# Create a mapping from string labels to integer labels
label_mapping = {"negative": 0, "neutral": 1, "positive": 2} # Customize this mapping as needed
# Apply the mapping to the "sentiment" column
df['label'] = df['label'].map(label_mapping)
# Specify the columns for text (input) and label (output)
text_column = "selected_text"
label_column = "label"
# Assuming you have already preprocessed and tokenized your text data
dataset = Dataset.from_pandas(df)
num_samples_per_class = 8
# Simulate the few-shot regime by sampling 8 examples per class
train_dataset = sample_dataset(dataset, label_column=label_column, num_samples=num_samples_per_class)
eval_dataset = dataset # Assuming you want to evaluate on the same DataFrame
# Load a SetFit model from Hub
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
# Create trainer
trainer1 = SetFitTrainer(
model=model,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
loss_class=CosineSimilarityLoss,
metric="accuracy",
batch_size=16,
num_iterations=20, # The number of text pairs to generate for contrastive learning
num_epochs=1, # The number of epochs to use for contrastive learning
column_mapping={text_column: "text", label_column: "label"} # Map dataset columns to text/label expected by trainer
)
# Train and evaluate
trainer1.train()
metrics = trainer1.evaluate()
# Pushing model to hub
trainer1.push_to_hub("Sanjay1234/Trillo-Project")
# But here I get a problem when I do transformation,
from langchain.chains import TransformChain, LLMChain, SimpleSequentialChain
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer, sample_dataset
from transformers import pipeline
def transform_func(text):
    shortened_text = "\n\n".join(text.split("\n\n")[:3])
    return shortened_text
transform_chain = TransformChain(
input_variables=["text"], output_variables=["output_text"], transform=transform_func
)
# I get a problem here
llm_chain = LLMChain(
llm={"llm": "Sanjay1234/Trillo-Project"}, # Provide the llm parameter as a dictionary
prompt={"prompt": "Summarize this text:"}
)
sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain])
text = "This is a long text. I want to transform it to only the first 3 paragraphs."
transformed_text = sequential_chain.run(text)
print(transformed_text)
```
I get the following error:
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[26], line 1
----> 1 llm_chain = LLMChain(
2 llm={"llm": "Sanjay1234/Trillo-Project"}, # Provide the llm parameter as a dictionary
3 prompt={"prompt": "Summarize this text:"}
4 )
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain\load\serializable.py:74, in Serializable.__init__(self, **kwargs)
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\pydantic\main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 2 validation errors for LLMChain
prompt
Can't instantiate abstract class BasePromptTemplate with abstract methods format, format_prompt (type=type_error)
llm
Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages (type=type_error)
```
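Based on the validation error (my reading, not a confirmed diagnosis), `LLMChain` expects an actual language-model object and a `PromptTemplate`, not plain dicts. A sketch using the Hugging Face Hub wrapper for the pushed repo (assuming the repo id above and that the model is servable for text generation there; if not, a locally loaded pipeline wrapped in `HuggingFacePipeline` would be needed instead):
```python
from langchain import HuggingFaceHub, LLMChain, PromptTemplate

prompt = PromptTemplate.from_template("Summarize this text:\n\n{text}")
llm = HuggingFaceHub(repo_id="Sanjay1234/Trillo-Project")  # requires HUGGINGFACEHUB_API_TOKEN

llm_chain = LLMChain(llm=llm, prompt=prompt)
```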
### Suggestion:
_No response_ | Issue: Not sure whether my transformation using the model I created was correct, as I am getting an error. | https://api.github.com/repos/langchain-ai/langchain/issues/10549/comments | 5 | 2023-09-13T15:50:31Z | 2023-12-20T16:05:16Z | https://github.com/langchain-ai/langchain/issues/10549 | 1,894,815,399 | 10,549 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Imports and code:
```python
import langchain
import os
from apikey import apikey
import openai
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain import OpenAI
from langchain.document_loaders import UnstructuredFileLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import RetrievalQA
import streamlit as st
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
import nltk
nltk.download("punkt")
#loading file
loader = UnstructuredFileLoader("aisc-shapes-database-v16.0.csv","aisc-shapes-database-v16.0_a1085.pdf")
documents = loader.load()
len(documents)
text_splitter = CharacterTextSplitter(chunk_size =1000000000, chunk_overlap = 0)
text = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()#(openai_api_key = os.environ['OPENAI_API_KEY'])
doc_search = Chroma.from_documents(text, embeddings)
chain = RetrievalQA.from_chain_type(llm =OpenAI(), chain_type="stuff", retriever=doc_search.as_retriever(search_kwargs={"k":1}))
query = "What is the area of wide flange W44X408"
result = chain.run(query)
print(result)
model.save('CIVEN-GPT')
```
### Suggestion:
It runs without the csv. file so im assuming it is that however I would like to be able to include the data in the file in the training. | Issue: Want to get this to run, im suspecting that the csv. file is causing the problem | https://api.github.com/repos/langchain-ai/langchain/issues/10544/comments | 2 | 2023-09-13T14:15:08Z | 2023-12-20T16:05:20Z | https://github.com/langchain-ai/langchain/issues/10544 | 1,894,630,727 | 10,544 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I want to intercept the input prompt and the output of a chain, so I added a custom callback to the chain (derived from _BaseCallbackHandler_), but the input prompt seems quite tricky to retrieve.
The _on_chain_start_ method has the information hidden in the "serialized" variable, but accessing it is quite cumbersome. I'll let you judge for yourself:
```
def on_chain_start(self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) -> Any:
"""Run when chain starts running."""
print(serialized["kwargs"]["prompt"]["kwargs"]["messages"][0]["kwargs"]["prompt"]["kwargs"]["template"])
```
Note that the format of _serialized_ changes from time to time for a reason I don't know, and it doesn't seem to be documented. This makes it unusable. Moreover, the "template" value is not the final prompt passed to the LLM after replacement of variables.
As for the _on_text_ method, it contains a formatted and colored text:
> Prompt after formatting:
> Human: prompt in green
Are there simpler ways to retrieve the input prompt from a callback handler?
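Not an authoritative answer, but one approach that may help: capture the prompt at the LLM boundary rather than the chain boundary. `on_llm_start` receives the fully formatted prompt strings (for chat models, `on_chat_model_start` receives the messages). A sketch:
```python
from typing import Any, Dict, List

from langchain.callbacks.base import BaseCallbackHandler


class PromptCaptureHandler(BaseCallbackHandler):
    """Hypothetical handler: grab the final prompt and output at the LLM level."""

    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        # `prompts` already has the template variables substituted in.
        for p in prompts:
            print(p)

    def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        print(response.generations[0][0].text)
```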
### Motivation
Showing both input and output could help debugging and it may be desirable to customize the outputs given by the _verbose_ mode.
### Your contribution
Maybe simply add the input message in the parameters of the _on_chain_start_ method, regardless of the way it has been generated. | Get input prompt in a callback handler | https://api.github.com/repos/langchain-ai/langchain/issues/10542/comments | 3 | 2023-09-13T13:42:49Z | 2024-05-07T16:04:58Z | https://github.com/langchain-ai/langchain/issues/10542 | 1,894,566,864 | 10,542 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
While trying to load a GPTQ model through a HuggingFace Pipeline and then run an agent on it, the inference time is really slow.
```
# Load configuration from the model to avoid warnings
generation_config = GenerationConfig.from_pretrained(model_name_or_path)
# Create a pipeline for text generation
pipe = pipeline(
task="text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=1024,
do_sample=True,
repetition_penalty=1.15,
generation_config=generation_config,
use_cache=False
)
local_llm = HuggingFacePipeline(pipeline=pipe)
logging.info("Local LLM Loaded")
```
The model is getting loaded on GPU

However the inference is really slow. I am waiting around 10 minutes for one iteration to complete.
```python
agent = create_csv_agent(
    local_llm,
    "titanic.csv",
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True
)
```
`agent.run("What is the total number of rows in titanic.csv")`
Also, I get an error message: `Observation: len() is not a valid tool, try one of [python_repl_ast].` How do I enable all tools so that the agent can use them?
### Suggestion:
No suggestion, require help. | Issue: Agents using GPTQ models from huggingface is really slow. | https://api.github.com/repos/langchain-ai/langchain/issues/10541/comments | 2 | 2023-09-13T13:26:28Z | 2023-12-20T16:05:26Z | https://github.com/langchain-ai/langchain/issues/10541 | 1,894,532,433 | 10,541 |
[
"hwchase17",
"langchain"
]
| ### Feature request
New features to support Baidu's Qianfan
### Motivation
I believe that Baidu's recently launched LLM platform, Qianfan, which offers a range of APIs, will soon become widely adopted. It would be beneficial to consider incorporating features that facilitate seamless integration between Langchain and Qianfan, making it easier for developers to build applications.
### Your contribution
https://github.com/langchain-ai/langchain/pull/10496 | Will langchain be able to support Baidu Qianfan in the future? | https://api.github.com/repos/langchain-ai/langchain/issues/10539/comments | 2 | 2023-09-13T12:55:51Z | 2023-09-28T01:19:12Z | https://github.com/langchain-ai/langchain/issues/10539 | 1,894,471,329 | 10,539 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain 0.0.287
In output_parsers there is a `SimpleJsonOutputParser` defined (json.py). This looks very reasonable for easily getting answers back in a structured format. However, the class does not work as it does not implement the method `get_format_instructions`, and thus calling it raises a `NotImplementedError`. In addition, there is no documentation and the class is not imported into the `__init__.py` of the directory.
Is this intended behavior? I am OK to submit a small patch; for my case the class comes in very handy and has less complexity than the approach via Pydantic.
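For illustration, the patch could be as small as overriding the missing method (a sketch; the class name is hypothetical and the instruction text is just an example):
```python
from langchain.output_parsers.json import SimpleJsonOutputParser


class JsonOutputParserWithInstructions(SimpleJsonOutputParser):
    def get_format_instructions(self) -> str:
        return "Return your answer as a single valid JSON object and nothing else."
```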
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce
1. from langchain.output_parsers.json import SimpleJsonOutputParser
2. output_parser = SimpleJsonOutputParser()
3. format_instructions = output_parser.get_format_instructions()
### Expected behavior
SimpleJsonOutputParser works like any other output parser. | SimpleJsonOutputParser not working | https://api.github.com/repos/langchain-ai/langchain/issues/10538/comments | 2 | 2023-09-13T12:50:53Z | 2023-12-20T16:05:31Z | https://github.com/langchain-ai/langchain/issues/10538 | 1,894,462,413 | 10,538 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Is there an agent toolkit for google calendar?
### Suggestion:
_No response_ | Issue: google calendar agent | https://api.github.com/repos/langchain-ai/langchain/issues/10536/comments | 1 | 2023-09-13T11:46:22Z | 2023-09-14T02:37:27Z | https://github.com/langchain-ai/langchain/issues/10536 | 1,894,354,179 | 10,536 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
1. I have downloaded the original LangSmith walkthrough notebook and modified it to run an AzureOpenAI LLM instead of OpenAI.
2. After a successful run of the first example I went to LangSmith, selected the first LLM call and opened it in the Playground.
3. I filled in the OpenAI key and hit 'Start'.
Here is the error I get:
Error: Invalid namespace: $ -> {"id":["langchain","chat_models","azure_openai","AzureChatOpenAI"],"lc":1,"type":"constructor","kwargs":{"temperature":0,"openai_api_key":{"id":["xxx"],"lc":1,"type":"secret"},"deployment_name":"chat-gpt","openai_api_base":"yyy","openai_api_type":"azure","openai_api_version":"2023-03-15-preview"}}
I have played with different ways of setting OPEN_API_KEY but none of them works, the same error is consistently displayed.
So is it a bug, or does Azure OpenAI not work by design in the Playground?
### Suggestion:
_No response_ | Issue: Is LangSmith playground compatible with Azure OpenAI? | https://api.github.com/repos/langchain-ai/langchain/issues/10533/comments | 14 | 2023-09-13T10:48:45Z | 2024-02-07T17:12:48Z | https://github.com/langchain-ai/langchain/issues/10533 | 1,894,264,876 | 10,533 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain deployment on sagemaker
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [x] Embedding Models
- [x] Prompts / Prompt Templates / Prompt Selectors
- [x] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler
from typing import Dict
import json
class HFContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
        input_dict = {
            "input": {
                "question": prompt,
                "context": model_kwargs
            }
        }
        input_str = json.dumps(input_dict)
        return input_str.encode('utf-8')

    def transform_output(self, output: bytes) -> str:
        response_json = output.read().decode('utf-8')
        res = json.loads(response_json)
        # Stripping away the input prompt from the returned response
        ans = res[0]['generated_text'][self.len_prompt:]
        ans = ans[:ans.rfind("Human")].strip()
        return ans
# Example parameters
parameters = {
'do_sample': True,
'top_p': 0.3,
'max_new_tokens': 1024,
'temperature': 0.6,
'watermark': True
}
llm = SagemakerEndpoint(
endpoint_name="huggingface-pytorch-inference-**********",
region_name="us-east-1",
model_kwargs=parameters,
content_handler=HFContentHandler(),
)
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
memory = ConversationBufferMemory()
# Creating a chain with buffer memory to keep track of conversations
chain = ConversationChain(llm=llm, memory=memory)
chain.predict({"input": {"question": "this is test", "context": "this is answer"}})
```
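A note on the last line (my reading of the `Missing some input keys` error shown under Expected behavior, so treat it as a suggestion): `ConversationChain` exposes a single input key named `input`, and `predict` takes it as a keyword argument rather than a positional dict. Something like the following may work:
```python
# Pass the prompt text as the `input` keyword; extra context can be folded into the string.
reply = chain.predict(input="this is test")

# Equivalent call style that also returns the memory-tracked output:
result = chain({"input": "this is test"})
print(result["response"])
```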
### Expected behavior
There is an error in the content handler; please help me correct it.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[87], line 8
5 # Creating a chain with buffer memory to keep track of conversations
6 chain = ConversationChain(llm=llm, memory=memory)
----> 8 chain.predict({"input": {"question": "this is test", "context": "this is answer"}})
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/chains/llm.py:255, in LLMChain.predict(self, callbacks, **kwargs)
240 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
241 """Format prompt with kwargs and pass to LLM.
242
243 Args:
(...)
253 completion = llm.predict(adjective="funny")
254 """
--> 255 return self(kwargs, callbacks=callbacks)[self.output_key]
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/chains/base.py:268, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
232 def __call__(
233 self,
234 inputs: Union[Dict[str, Any], Any],
(...)
241 include_run_info: bool = False,
242 ) -> Dict[str, Any]:
243 """Execute the chain.
244
245 Args:
(...)
266 `Chain.output_keys`.
267 """
--> 268 inputs = self.prep_inputs(inputs)
269 callback_manager = CallbackManager.configure(
270 callbacks,
271 self.callbacks,
(...)
276 self.metadata,
277 )
278 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/chains/base.py:425, in Chain.prep_inputs(self, inputs)
423 external_context = self.memory.load_memory_variables(inputs)
424 inputs = dict(inputs, **external_context)
--> 425 self._validate_inputs(inputs)
426 return inputs
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/chains/base.py:179, in Chain._validate_inputs(self, inputs)
177 missing_keys = set(self.input_keys).difference(inputs)
178 if missing_keys:
--> 179 raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'input'} | ValueError: Missing some input keys: {'input'} | https://api.github.com/repos/langchain-ai/langchain/issues/10531/comments | 7 | 2023-09-13T09:36:13Z | 2024-05-22T16:07:17Z | https://github.com/langchain-ai/langchain/issues/10531 | 1,894,137,855 | 10,531 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers is unreachable.
### Suggestion:
_No response_ | Can not access to url: https://python.langchain.com/docs/integrations/text_embedding/sentence_transformers | https://api.github.com/repos/langchain-ai/langchain/issues/10530/comments | 3 | 2023-09-13T09:30:31Z | 2023-12-25T16:08:20Z | https://github.com/langchain-ai/langchain/issues/10530 | 1,894,127,953 | 10,530 |
[
"hwchase17",
"langchain"
]
| hi team,
In langchain agent, any recommendations to compress the content? Hoping to reduce the token usage.
Setting max token was not working to reduce the token usage. | compress content when using gpt-4 | https://api.github.com/repos/langchain-ai/langchain/issues/10529/comments | 2 | 2023-09-13T09:20:29Z | 2023-12-20T16:05:41Z | https://github.com/langchain-ai/langchain/issues/10529 | 1,894,107,226 | 10,529 |
[
"hwchase17",
"langchain"
]
| ### Feature request
As of today, if a tool crashes, the whole agent or chain crashes. From a user's point of view, it is understandable (and acceptable) that a specific tool may not be available.
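For context, the opt-in mechanism that exists today looks roughly like this (a sketch; it assumes a version where `handle_tool_error` is available):
```python
from langchain.tools import Tool
from langchain.tools.base import ToolException


def flaky_search(query: str) -> str:
    raise ToolException("search backend is down")


search_tool = Tool.from_function(
    func=flaky_search,
    name="search",
    description="Search the web.",
    handle_tool_error=True,  # opt-in today; this issue proposes making it the default
)
```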
### Motivation
The user experience should be maintained if a dependency is broken. Plus, catching tool errors by default can enhance the software's reliability.
### Relates
- https://github.com/langchain-ai/langchain/issues/8348 | Handle by default `ToolException` | https://api.github.com/repos/langchain-ai/langchain/issues/10528/comments | 2 | 2023-09-13T09:05:35Z | 2024-02-06T16:30:01Z | https://github.com/langchain-ai/langchain/issues/10528 | 1,894,078,446 | 10,528 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Is there any way in langchain to fetch documents from multiple vectorstores, and then combine them to ask the question.
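Not an authoritative answer, but one existing option that may fit: `MergerRetriever` (and, similarly, `EnsembleRetriever`) wraps several retrievers and merges their results, which can then be fed to a QA chain. A sketch with hypothetical stores:
```python
from langchain.retrievers import MergerRetriever

# store_a / store_b are hypothetical, already-built vector stores
lotr = MergerRetriever(retrievers=[store_a.as_retriever(), store_b.as_retriever()])
docs = lotr.get_relevant_documents("my question")
```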
### Suggestion:
_No response_ | Issue: How to retrieve and search from multiple collections or directories? | https://api.github.com/repos/langchain-ai/langchain/issues/10526/comments | 2 | 2023-09-13T07:03:24Z | 2023-12-20T16:05:46Z | https://github.com/langchain-ai/langchain/issues/10526 | 1,893,881,183 | 10,526 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version = 0.0.281
python = 3.11
opensearch-py = 2.3.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to do metadata-based filtering alongside the query execution using `OpensearchVectorSearch.similarity_search()`. But when I use `metadata_field` and `metadata_filter`, the search doesn't seem to take them into account and still returns results outside of those filters.
Here is my code:
```python
response = es.similarity_search(
    query="<sample query text>",
    K=4,
    metadata_field="title",
    metadata_filter={"match": {"title": "<sample doc title>"}},
)
```
Here `es` is the `OpenSearchVectorSearch` object for `index1`
The output structure is like this:
`[Document(page_content = ' ', metadata={'vector_field' : [], 'text' : ' ', 'metadata' : {'source' : ' ', 'title' : ' ' }})]`
Here the title I see is not the title I specified in my query.
Steps to reproduce:
1. Create an Opensearch index with multiple documents.
2. Run similarity_search() query with a metadata_field and/or metadata_filter
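If `metadata_filter` is not a recognized keyword argument in this version, it may simply be ignored. For comparison, a sketch using the `boolean_filter` kwarg that the OpenSearch integration documents for approximate k-NN search; the exact field path depends on your index mapping, so treat it as an assumption:

```python
boolean_filter = {
    "bool": {"filter": {"term": {"metadata.title.keyword": "<sample doc title>"}}}
}
response = es.similarity_search(
    "<sample query text>",
    k=4,
    boolean_filter=boolean_filter,
)
```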
### Expected behavior
The query should be run against the specified `metadata_field` and `metadata_filter` and in output, I should only see the correct document name I specified in `metadata_field` and `metadata_filter` | Opensearch metadata_field and metadata_filter not working | https://api.github.com/repos/langchain-ai/langchain/issues/10524/comments | 7 | 2023-09-13T05:51:57Z | 2024-04-23T19:05:30Z | https://github.com/langchain-ai/langchain/issues/10524 | 1,893,792,079 | 10,524 |
[
"hwchase17",
"langchain"
]
| ### Feature request
An input for conversational chains to be able to limit their context to a set number of chats
### Motivation
I am in the process of building a document analysis tool using LangChain, but when the chat history becomes too long I get an error saying the OpenAI token limit has been reached, because the context keeps growing. Is there some way I could limit the context to only a certain number of messages instead of passing all of them in?
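For reference, a minimal sketch of one existing way to cap the history, assuming `ConversationBufferWindowMemory` fits your setup (`k` is the number of recent exchanges kept in the prompt):

```python
from langchain.chains import ConversationChain
from langchain.llms import OpenAI
from langchain.memory import ConversationBufferWindowMemory

# Only the last 5 human/AI exchanges are included in the context window.
memory = ConversationBufferWindowMemory(k=5)
chain = ConversationChain(llm=OpenAI(temperature=0), memory=memory)
chain.predict(input="Summarise section 2 of the document.")
```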
### Your contribution
No, I am very new to using LangChain and having a hard time understanding the codebase, so I am afraid there is nothing I can do to help. | only use past x messages | https://api.github.com/repos/langchain-ai/langchain/issues/10521/comments | 2 | 2023-09-13T02:42:51Z | 2023-12-20T16:05:51Z | https://github.com/langchain-ai/langchain/issues/10521 | 1,893,622,488 | 10,521
[
"hwchase17",
"langchain"
]
| hi team,
I am using Azure OpenAI GPT-4-32k as the LLM in LangChain. I implemented an OpenAI plugin through an agent, but the cost is increasing at an incredible rate. I suspect the agent asks the GPT-4 model to interpret the plugin's OpenAPI JSON, which is what drives the token usage up. Are there any recommendations for reducing token usage in the agent?
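As a first diagnostic step, a sketch of measuring where the tokens go with the OpenAI callback (`agent` stands for your initialized agent executor):

```python
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    agent.run("your question")

print(cb.prompt_tokens, cb.completion_tokens, cb.total_tokens, cb.total_cost)
```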
thanks | Reduce azure openai token usage | https://api.github.com/repos/langchain-ai/langchain/issues/10520/comments | 2 | 2023-09-13T01:23:29Z | 2023-12-20T16:05:56Z | https://github.com/langchain-ai/langchain/issues/10520 | 1,893,566,283 | 10,520 |
[
"hwchase17",
"langchain"
]
| ### Feature request
BaseStringMessagePromptTemplate.from_template supports the template_format variable, while BaseStringMessagePromptTemplate.from_template_file does not.
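A sketch of the asymmetry: the first call works today, while the `template_format` argument in the second call is the proposed (currently hypothetical) part:

```python
from langchain.prompts.chat import HumanMessagePromptTemplate

# Supported today: template_format can be passed when loading from a string.
msg = HumanMessagePromptTemplate.from_template(
    "Hello {{ name }}", template_format="jinja2"
)

# Proposed: accept the same argument when loading from a file.
msg_from_file = HumanMessagePromptTemplate.from_template_file(
    "greeting.j2", input_variables=["name"], template_format="jinja2"
)
```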
### Motivation
All supported template formats (including Jinja2) should be supported by all template loaders equally.
### Your contribution
I'm not experienced enough with the Langchain codebase to submit PRs at this time. | Support jinja2 template format when using ChatPromptTemplate.from_template_file | https://api.github.com/repos/langchain-ai/langchain/issues/10519/comments | 7 | 2023-09-13T01:10:23Z | 2024-02-09T16:21:28Z | https://github.com/langchain-ai/langchain/issues/10519 | 1,893,555,356 | 10,519 |
[
"hwchase17",
"langchain"
]
| ### System Info
Device name LAPTOP-3BD5HR1V
Processor AMD Ryzen 5 3500U with Radeon Vega Mobile Gfx 2.10 GHz
Installed RAM 20.0 GB (17.9 GB usable)
Device ID F8ACB5C8-80FB-46C6-AE6D-33AD019A5728
Product ID 00325-82110-59554-AAOEM
System type 64-bit operating system, x64-based processor
Pen and touch No pen or touch input is available for this display
Edition Windows 11 Home
Version 22H2
Installed on 10/5/2022
OS build 22621.2134
Serial number PF2WCKPH
Experience Windows Feature Experience Pack 1000.22659.1000.0
Python 3.11.2
langchain 0.0.272
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have created a Python CLI tool called 'dir-diary' that uses Chain.run to make API calls. The tool is built on `click`. When I run the tool from a terminal window with a Python virtual environment activated, the tool works okay. It also appears to work okay from both Linux-based and Windows-based Github Actions runners. But when I run it from a vanilla Windows terminal on my own machine, langchain fails to authenticate with Azure DevOps after several retries.
There's a whole lot of text returned with the error. The most helpful bits are:
```
.APIError: HTTP code 203 from API.

'Microsoft Internet Explorer's Enhanced Security Configuration is currently enabled on your environment. This enhanced level of security prevents our web integration experiences from displaying or performing correctly. To continue with your operation please disable this configuration or contact your administrator'

'Unable to complete authentication for user due to looping logins'

Traceback (most recent call last):
  File "C:\Users\chris\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\api_requestor.py", line 755, in _interpret_response_line
    data = json.loads(rbody)
    ^^^^^^^^^^^^^^^^^
  File "C:\Users\chris\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 335, in loads
    raise JSONDecodeError("Unexpected UTF-8 BOM (decode using utf-8-sig)",
json.decoder.JSONDecodeError: Unexpected UTF-8 BOM (decode using utf-8-sig): line 1 column 1 (char 0)
```
Steps to reproduce:
I haven't fully figured out the secret to reproducing this yet. Obviously, if it works on a Windows runner, then it's not really a Windows problem. There must be something problematic about my local setup that I can't identify. FWIW, here are my steps:
1. run command `pip install -U dir-diary` from a Windows terminal
2. go to any code project folder
3. run command `summarize`
I have tried running as administrator and turning down the Internet security level through Internet Options in Control Panel, but neither of those solutions fixed the problem.
### Expected behavior
It's supposed to successfully query the API to summarize the project folder. | APIError: HTTP code 203 from API when running from a Click CLI app on a local Windows terminal | https://api.github.com/repos/langchain-ai/langchain/issues/10511/comments | 2 | 2023-09-12T20:33:16Z | 2023-12-19T00:47:23Z | https://github.com/langchain-ai/langchain/issues/10511 | 1,893,219,675 | 10,511 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.10.12
Google Colab
Elasticsearch Cloud 8.9.2
Langchain - latest
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps:
1. Load list of documents
2. Setup ElasticsearchStore of Langchain, with appropriate ES cloud credentials
3. Successfully create index with custom embedding model (HF embedding model, deployed on colab)
4. Deploy ELSER model and run it (with default model id).
5. Try creating index with SparseVectorRetrievalStrategy (ELSER) over the same list of documents.
6. Tried changing the timeout, but it didn't affect the outcome.
7. NOTE: It does start uploading docs and docs count is increasing, but it stops after about 10 sec. I tried to run the ELSER model on 3 nodes, but nothing changed.
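Step 6 notes that changing the timeout did not help, but for completeness, here is a sketch of supplying a longer per-request timeout by building the client yourself and passing it via `es_connection`; this assumes `from_documents` accepts that parameter in this version:

```python
from elasticsearch import Elasticsearch
from langchain.vectorstores import ElasticsearchStore

es_client = Elasticsearch(
    cloud_id="<cloud id>",
    basic_auth=("elastic", "<password>"),
    request_timeout=120,  # give each ELSER bulk/inference request more time
)

store = ElasticsearchStore.from_documents(
    documents=split_texts,
    es_connection=es_client,
    index_name="search-tmd-elser",
    strategy=ElasticsearchStore.SparseVectorRetrievalStrategy(),
)
```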
### Expected behavior
```
WARNING:elastic_transport.node_pool:Node <Urllib3HttpNode([https://-------.us-central1.gcp.cloud.es.io:443](https:/------.us-central1.gcp.cloud.es.io/))> has failed for 1 times in a row, putting on 1 second timeout
---------------------------------------------------------------------------
ConnectionTimeout                         Traceback (most recent call last)
[<ipython-input-92----------cb>](https://-------colab.googleusercontent.com/outputframe.html?vrz=colab_20230911-060143_RC00_564310758#) in <cell line: 1>()
----> 1 elastic_elser_search = ElasticsearchStore.from_documents(
      2     documents=split_texts,
      3     es_cloud_id="cloudid",
      4     index_name="search-tmd-elser",
      5     es_user="elastic",

10 frames
[/usr/local/lib/python3.10/dist-packages/elastic_transport/_node/_http_urllib3.py](https://---XXXX-----0-colab.googleusercontent.com/outputframe.html?vrz=colab_---XXXX-----#) in perform_request(self, method, target, body, headers, request_timeout)
    197                 exception=err,
    198             )
--> 199             raise err from None
    200
    201         meta = ApiResponseMeta(

ConnectionTimeout: Connection timed out
```
[
"hwchase17",
"langchain"
]
| ### Feature request
## Description
Currently, the SQLDatabaseChain class is designed to optionally return intermediate steps taken during the SQL command generation and execution. These intermediate steps are helpful in understanding the processing flow, especially during debugging or for logging purposes. However, these intermediate steps do not store the SQL results obtained at various steps, which could offer deeper insights and can aid in further optimizations or analyses.
This feature request proposes to enhance the SQLDatabaseChain class to save SQL results from intermediate steps into a dictionary, akin to how SQL commands are currently stored. This would not only facilitate a more comprehensive view of each step but also potentially help in identifying and fixing issues or optimizing the process further.
### Motivation
#### Insightful Debugging:
Storing SQL results in intermediate steps will facilitate deeper insights during debugging, helping to pinpoint the exact step where a potential issue might be occurring.
#### Enhanced Logging:
Logging the SQL results at each step can help in creating a more detailed log, which can be instrumental in analyzing the performance and identifying optimization opportunities.
Improved Analysis and Optimization: With the SQL results available at each step, it becomes feasible to analyze the results at different stages, which can be used to further optimize the SQL queries or the process flow.
### Your contribution
I propose to contribute to implementing this feature by:
#### Code Adaptation:
Modifying the _call_ method in the SQLDatabaseChain class to include SQL results in the intermediate steps dictionary, similar to how sql_cmd is currently being saved.
#### Testing:
Developing appropriate unit tests to ensure the correct functioning of the new feature, and that it does not break the existing functionality.
#### Documentation:
Updating the documentation to include details of the new feature, illustrating how to use it and how it can benefit the users.
#### Optimization:
Once implemented, analyzing the stored results to propose further optimizations or enhancements to the Langchain project.
## Proposed Changes
In the _call_ method within the SQLDatabaseChain class:
Amend the intermediate steps dictionary to include a new key, say sql_result, where the SQL results at different stages would be saved.
During the SQL execution step, save the SQL result into the sql_result key in the intermediate steps dictionary, similar to how sql_cmd is being saved currently.
```
if not self.use_query_checker:
...
intermediate_steps.append({"sql_cmd": sql_cmd, "sql_result": str(result)}) # Save sql result here
else:
...
intermediate_steps.append({"sql_cmd": checked_sql_command, "sql_result": str(result)}) # Save sql result here
```
I believe that this contribution would be a valuable addition to the Langchain project, and I am eager to collaborate with the team to make it a reality.
Looking forward to hearing your thoughts on this proposal. | Enhance SQLDatabaseChain with SQL Results in Intermediate Steps Dictionary | https://api.github.com/repos/langchain-ai/langchain/issues/10500/comments | 2 | 2023-09-12T15:11:12Z | 2023-12-19T00:47:27Z | https://github.com/langchain-ai/langchain/issues/10500 | 1,892,729,571 | 10,500 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
When I run the code I don't get any errors, but I also don't get any output in the terminal or output area. Can you help?

### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/10497/comments | 4 | 2023-09-12T14:15:52Z | 2023-12-19T00:47:33Z | https://github.com/langchain-ai/langchain/issues/10497 | 1,892,622,763 | 10,497 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version = 0.0.286
Python=3.8.8
MacOs
I am working on a **ReAct agent with Memory and Tools** that should stop and ask a human for input.
I worked off this article in the documentation: https://python.langchain.com/docs/modules/memory/agent_with_memory
On Jupyter Notebook it works well when the agent stops and picks up the "Observation" from the human.
Now I am trying to bring this over to Streamlit and am struggling with having the agent wait for the observation.
As one can see in the video, the output is routed into the correct Streamlit container, yet the agent doesn't stop to collect the human feedback.
I am using a custom output parser and the recommended StreamlitCallbackHandler.
https://github.com/langchain-ai/langchain/assets/416379/ed57834a-2a72-4938-b901-519f0748dd95
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My output parser looks like this:
```
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish


class CustomOutputParser(AgentOutputParser):
def parse(self, llm_output: str) -> Union[AgentAction, AgentFinish]:
# Check if agent should finish
print(llm_output)
if "Final Answer:" in llm_output:
print("Agent should finish")
return AgentFinish(
# Return values is generally always a dictionary with a single `output` key
# It is not recommended to try anything else at the moment :)
return_values={"output": llm_output.split(
"Final Answer:")[-1].strip()},
log=llm_output,
)
# Parse out the action and action input
regex = r"Action\s*\d*\s*:(.*?)\nAction\s*\d*\s*Input\s*\d*\s*:[\s]*(.*)"
match = re.search(regex, llm_output, re.DOTALL)
if not match:
print("Parsing Action Input")
return AgentFinish(
# Return values is generally always a dictionary with a single `output` key
# It is not recommended to try anything else at the moment :)
return_values={"output": llm_output},
log=llm_output,
)
# raise ValueError(f"Could not parse LLM output: `{llm_output}`")
action = match.group(1).strip()
action_input = match.group(2)
#This can't be agent finish because otherwise the agent stops working.
return AgentAction(tool=action, tool_input=action_input.strip(" ").strip('"'), log=llm_output)
```
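Two things that may be relevant here. First, the error in the title suggests the model emitted `Human` while the registered tool is named `human`; tool lookup is by exact name. Second, a sketch of pointing the human tool's prompt/input functions at Streamlit instead of stdin, assuming `HumanInputRun` accepts the `prompt_func`/`input_func` overrides. The session-state handling is only illustrative, since Streamlit's rerun model means you cannot simply block:

```python
import streamlit as st
from langchain.tools import HumanInputRun

def show_prompt(query: str) -> None:
    # Surface the agent's question in the Streamlit UI.
    st.info(query)

def get_answer() -> str:
    # Placeholder: a real app would stash the pending question in
    # st.session_state and resume the agent on the next rerun.
    return st.session_state.get("human_answer", "")

human_tool = HumanInputRun(prompt_func=show_prompt, input_func=get_answer)
```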
### Expected behavior
The agent should wait for streamlit to create an input_chat and use this as the feedback from the "human" tool | Observation: Human is not a valid tool, try one of [human, Search, Calculator] | https://api.github.com/repos/langchain-ai/langchain/issues/10494/comments | 3 | 2023-09-12T13:57:04Z | 2023-12-19T00:47:38Z | https://github.com/langchain-ai/langchain/issues/10494 | 1,892,585,572 | 10,494 |
[
"hwchase17",
"langchain"
]
| ### System Info
Using LangChain 0.0.276
Python 3.11.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Construct a FlareChain instance like this and run it:
```
from langchain.chains import FlareChain
from langchain.chat_models import ChatOpenAI

myllm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo-16k")
flare = FlareChain.from_llm(
llm=myllm,
retriever=vectorstore.as_retriever(),
max_generation_len=164,
min_prob=0.3,
)
result = flare.run(querytext)
```
When I inspect during debugging, the specified LLM model was set on `flare.question_generator_chain.llm.model_name` but NOT `flare.response_chain.llm.model_name`,
which is still the default value.
### Expected behavior
I'm expecting `flare.response_chain.llm.model_name` to return `gpt-3.5-turbo-16k`, not `text-davinci-003` | FlareChain's response_chain not picking up specified LLM model | https://api.github.com/repos/langchain-ai/langchain/issues/10493/comments | 9 | 2023-09-12T13:09:18Z | 2024-01-15T16:57:52Z | https://github.com/langchain-ai/langchain/issues/10493 | 1,892,491,559 | 10,493 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am getting this error when using LangChain vectorstore similarity search on my local machine: `pinecone.core.client.exceptions.ApiTypeError: Invalid type for variable 'namespace'. Required value type is str and passed type was NoneType at ['namespace']`. But it works fine on Google Colab.
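A workaround sketch: pass the namespace explicitly as a string (the empty string targets the default namespace). `index` and `embeddings` are placeholders for your own objects:

```python
from langchain.vectorstores import Pinecone

vectorstore = Pinecone(index, embeddings.embed_query, "text", namespace="")
docs = vectorstore.similarity_search("your query", k=4, namespace="")
```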
### Suggestion:
_No response_ | Issue: pinecone.core.client.exceptions.ApiTypeError: Invalid type for variable 'namespace'. Required value type is str and passed type was NoneType at ['namespace'] | https://api.github.com/repos/langchain-ai/langchain/issues/10489/comments | 2 | 2023-09-12T11:02:56Z | 2023-09-13T06:20:43Z | https://github.com/langchain-ai/langchain/issues/10489 | 1,892,270,542 | 10,489 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain:0.0.286
python:3.10.10
redis:5.0.0b4
### Who can help?
@hwc
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
rds = Redis.from_texts(
    texts,
    embeddings,
    metadatas=metadata,
    redis_url="XXXXX",
    index_name="XXXX",
)
```
The following exception occurred:
AttributeError: 'RedisCluster' object has no attribute 'module_list'
The version of my redis package is 5.0.0b4.
An error occurred in the following code:
```python
# langchain\Lib\site-packages\langchain\utilities\redis.py
def check_redis_module_exist(client: RedisType, required_modules: List[dict]) -> None:
    """Check if the correct Redis modules are installed."""
    installed_modules = client.module_list()  # <- this call fails on RedisCluster
```
### Expected behavior
redis init success | Redis vector init error | https://api.github.com/repos/langchain-ai/langchain/issues/10487/comments | 14 | 2023-09-12T09:59:40Z | 2023-12-26T16:06:02Z | https://github.com/langchain-ai/langchain/issues/10487 | 1,892,138,220 | 10,487 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hey guys!
Thanks for the great tool you've developed.
LLaMA now supports selecting a device, and so does GPT4All:
https://docs.gpt4all.io/gpt4all_python.html#gpt4all.gpt4all.GPT4All.__init__
Can you guys please add the device property to the file: "langchain/llms/gpt4all.py"
LN 96:
```python
device: Optional[str] = Field("cpu", alias="device")
"""Device name: cpu, gpu, nvidia, intel, amd or DeviceName."""
```
Model Init:
```python
values["client"] = GPT4AllModel(
    model_name,
    model_path=model_path or None,
    model_type=values["backend"],
    allow_download=values["allow_download"],
    device=values["device"],
)
```
### Motivation
Necessity to use the device on GPU powered machines.
### Your contribution
None.. :( | Add device to GPT4All | https://api.github.com/repos/langchain-ai/langchain/issues/10486/comments | 0 | 2023-09-12T09:02:19Z | 2023-10-04T00:37:32Z | https://github.com/langchain-ai/langchain/issues/10486 | 1,892,030,554 | 10,486 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
LangChain is still using the deprecated huggingface_hub `InferenceApi` in the latest version. The `InferenceApi` client will be removed in version '0.19.0'.
```
/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_deprecation.py:127: FutureWarning: '__init__' (from 'huggingface_hub.inference_api') is deprecated and will be removed from version '0.19.0'. `InferenceApi` client is deprecated in favor of the more feature-complete `InferenceClient`. Check out this guide to learn how to convert your script to use it: https://huggingface.co/docs/huggingface_hub/guides/inference#legacy-inferenceapi-client.
warnings.warn(warning_message, FutureWarning)
```
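For illustration, the rough shape of the recommended replacement suggested below (a sketch; the model name and parameters are placeholders):

```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="google/flan-t5-xxl", token="hf_...")
# text_generation replaces the old InferenceApi(...) call for text-generation tasks.
output = client.text_generation("What is Deep Learning?", max_new_tokens=100)
```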
### Suggestion:
It is recommended to use the new `InferenceClient` in huggingface_hub. | Issue: Use huggingface_hub InferenceClient instead of InferenceAPI | https://api.github.com/repos/langchain-ai/langchain/issues/10483/comments | 3 | 2023-09-12T08:37:39Z | 2024-03-29T16:06:25Z | https://github.com/langchain-ai/langchain/issues/10483 | 1,891,974,960 | 10,483
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi Team,
I have a fixed Elasticsearch version (7.6) which I cannot upgrade. Could you please share some details about which version of LangChain supports this Elasticsearch version?
The problem I have faced with the latest LangChain is that similarity search (or a normal search) reports that k-NN is not available: "Unexpected keyword argument called 'knn'".
If possible, please share sample code that connects to the existing Elasticsearch instance and creates an index, converting the Elasticsearch data into the document format that LangChain expects.
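For reference, a sketch using the older `ElasticVectorSearch` wrapper, which relies on script-score queries rather than the 8.x k-NN API and may therefore work against a 7.6 cluster. This assumes the class is still present in your LangChain version; the URL, index name and embedding model are placeholders:

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.schema import Document
from langchain.vectorstores import ElasticVectorSearch

embeddings = HuggingFaceEmbeddings()
docs = [Document(page_content="some text", metadata={"source": "es-doc-1"})]

store = ElasticVectorSearch.from_documents(
    docs,
    embeddings,
    elasticsearch_url="http://localhost:9200",
    index_name="langchain-demo",
)
hits = store.similarity_search("query text", k=4)
```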
### Suggestion:
_No response_ | Issue: Which version of langchain supports the elasticsearch 7.6 | https://api.github.com/repos/langchain-ai/langchain/issues/10481/comments | 22 | 2023-09-12T07:49:46Z | 2024-03-26T16:05:36Z | https://github.com/langchain-ai/langchain/issues/10481 | 1,891,889,704 | 10,481 |
[
"hwchase17",
"langchain"
]
| ### System Info
python == 3.11
langchain == 0.0.286
windows 10
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import numpy as np
import pandas as pd

from langchain.chat_models import AzureChatOpenAI
from langchain.agents.agent_types import AgentType
from langchain.agents import create_pandas_dataframe_agent
llm = AzureChatOpenAI(
deployment_name = "gpt-4",
model_name = "gpt-4",
openai_api_key = '...',
openai_api_version = "2023-08-01-preview",
openai_api_base = '...',
temperature = 0
)
df = pd.DataFrame({
'Feature1': np.random.rand(1000000),
'Feature2': np.random.rand(1000000),
'Class': np.random.choice(['Class1', 'Class2', 'Class3'], 1000000)
})
agent = create_pandas_dataframe_agent(
llm,
df,
verbose=False,
agent_type=AgentType.OPENAI_FUNCTIONS,
reduce_k_below_max_tokens=True,
max_execution_time = 1,
)
agent.run('print 100 first rows in dataframe')
```
### Expected behavior
The `max_execution_time` is set to 1, indicating that the query should run for one second before stopping. However, it currently runs for approximately 10 seconds before stopping. This is a simple example, but in the case of the actual dataframe that I have (which contains a lot of textual data), the agent runs for around one minute before I receive the results. At the same time, if the query doesn't request a large amount of data from the model to output, the agent would stop in one second. For instance, if my query is agent.run('give some examples of delays mention?'), the results would not be returned because the max_execution_time is 1, and it needs roughly three seconds to output the results. Therefore, this troubleshooting indicates that there's an issue with the `max_execution_time` when the requested output is too lengthy. | max_execution_time does not work for some queries in create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/10479/comments | 3 | 2023-09-12T07:24:52Z | 2023-12-19T00:47:52Z | https://github.com/langchain-ai/langchain/issues/10479 | 1,891,850,817 | 10,479 |
[
"hwchase17",
"langchain"
]
| ### System Info
```
@router.post('/web-page')
def web_page_embedding(model: WebPageEmbedding):
try:
data = download_page(model.page)
return {'success': True}
except Exception as e:
return Response(str(e))
def download_page(url: str):
loader = AsyncChromiumLoader(urls=[url])
docs = loader.load()
return docs
```
I am trying to download the page content using the above FastAPI code. But I am facing this `NotImplementedError` error
```
Task exception was never retrieved
future: <Task finished name='Task-6' coro=<Connection.run() done, defined at E:\Projects\abcd\venv\Lib\site-packages\playwright\_impl\_connection.py:264> exception=NotImplementedError()>
Traceback (most recent call last):
File "E:\Projects\abcd\venv\Lib\site-packages\playwright\_impl\_connection.py", line 271, in run
await self._transport.connect()
File "E:\Projects\abcd\venv\Lib\site-packages\playwright\_impl\_transport.py", line 135, in connect
raise exc
File "E:\Projects\abcd\venv\Lib\site-packages\playwright\_impl\_transport.py", line 123, in connect
self._proc = await asyncio.create_subprocess_exec(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hasan\AppData\Local\Programs\Python\Python311\Lib\asyncio\subprocess.py", line 218, in create_subprocess_exec
transport, protocol = await loop.subprocess_exec(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hasan\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 1694, in subprocess_exec
transport = await self._make_subprocess_transport(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\hasan\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 502, in _make_subprocess_transport
raise NotImplementedError
NotImplementedError
```
I have also tried async/await, directly calling the async method of the loader, and this does not work either:
```
@router.post('/web-page-1')
async def web_page_embedding_async(model: WebPageEmbedding):
try:
data = await download_page_async(model.page)
return {'success': True}
except Exception as e:
return Response(str(e))
async def download_page_async(url: str):
loader = AsyncChromiumLoader(urls=[url])
# docs = loader.load()
docs = await loader.ascrape_playwright(url)
return docs
```
But if I try to download the page in a plain Python script, it works as expected (both async and non-async):
```
if __name__ == '__main__':
try:
url = 'https://python.langchain.com/docs/integrations/document_loaders/async_chromium'
# d = download_page(url) # working
d = asyncio.run(download_page_async(url)) # also working
print(len(d))
except Exception as e:
print(e)
```
Packages:
- langchain==0.0.284
- playwright==1.37.0
- fastapi==0.103.1
- uvicorn==0.23.2
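For what it's worth, the traceback ends in `asyncio.create_subprocess_exec` raising `NotImplementedError`, which on Windows usually means the running event loop does not support subprocesses (Playwright has to spawn a browser process). A sketch of forcing the proactor loop policy before starting uvicorn; treat this as an assumption worth testing rather than a confirmed fix:

```python
import asyncio
import sys

import uvicorn

if sys.platform == "win32":
    # The proactor loop implements the subprocess support Playwright needs.
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())

if __name__ == "__main__":
    uvicorn.run("main:app", host="127.0.0.1", port=8000)
```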
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
Please run the code
### Expected behavior
Loader should work in FastAPI environment | AsyncChromiumLoader not working with FastAPI | https://api.github.com/repos/langchain-ai/langchain/issues/10475/comments | 10 | 2023-09-12T05:03:16Z | 2024-04-04T15:35:52Z | https://github.com/langchain-ai/langchain/issues/10475 | 1,891,676,241 | 10,475 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.286
Python version: 3.11.2
Platform: MacOS Ventura 13.5.1 M1 chip
Weaviate 1.21.2 as vectorstore
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When following LangChain's documentation for [ Weaviate Self-Query Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/weaviate_self_query),
I get the following Warning:
```
/opt/homebrew/lib/python3.11/site-packages/langchain/chains/llm.py:278: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
```
and the following errors
```
ValueError: Received disallowed comparator gte. Allowed comparators are [<Comparator.EQ: 'eq'>]
...
... stack trace
...
File "/opt/homebrew/lib/python3.11/site-packages/langchain/chains/query_constructor/base.py", line 52, in parse
raise OutputParserException(
langchain.schema.output_parser.OutputParserException: Parsing text
``json
{
"query": "natural disasters",
"filter": "and(gte(\"published_at\", \"2022-10-01\"), lte(\"published_at\", \"2022-10-07\"))"
}
``
raised following error:
Received disallowed comparator gte. Allowed comparators are [<Comparator.EQ: 'eq'>]
```
The following code led to the errors
```
import os, openai, weaviate, logging
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.weaviate import WeaviateTranslator
openai.api_key = os.environ['OPENAI_API_KEY']
embeddings = OpenAIEmbeddings()
client = weaviate.Client(
url = WEAVIATE_URL,
additional_headers = {
"X-OpenAI-Api-Key": openai.api_key
}
)
weaviate = Weaviate(
client = client,
index_name = INDEX_NAME,
text_key = "article_body"
)
metadata_field_info = [ # Shortened for brevity
AttributeInfo(
name="published_at",
description="Date article was published",
type="date",
),
AttributeInfo(
name="weblink",
description="The URL where the document was taken from.",
type="string",
),
AttributeInfo(
name="keywords",
description="A list of keywords from the piece of text.",
type="string",
),
]
logging.basicConfig()
logging.getLogger('langchain.retrievers.self_query').setLevel(logging.INFO)
document_content_description = "News articles"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm,
weaviate,
document_content_description,
metadata_field_info,
enable_limit = True,
verbose=True,
)
returned_docs_selfq = retriever.get_relevant_documents(question)
```
### Expected behavior
No warnings or errors, or documentation stating what output parser replicates the existing functionality. Specifically picking up date range filters from user queries | Error when using Self Query Retriever with Weaviate | https://api.github.com/repos/langchain-ai/langchain/issues/10474/comments | 2 | 2023-09-12T04:46:10Z | 2023-12-19T00:47:57Z | https://github.com/langchain-ai/langchain/issues/10474 | 1,891,662,372 | 10,474 |
[
"hwchase17",
"langchain"
]
| Currently, there is no support for agents that have both:
1) Conversational history
2) Structured tool chat (functions with multiple inputs/parameters)
#3700 mentions this as well, but it was not resolved: `AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION` is zero-shot and essentially has [no memory](https://stackoverflow.com/questions/76906469/langchain-zero-shot-react-agent-uses-memory-or-not). The LangChain docs for [structured tool chat](https://python.langchain.com/docs/modules/agents/agent_types/structured_chat) give the agent a sense of memory by creating one massive input prompt. Still, this agent performed much worse, as #3700 mentions, and the other agents do not support multi-input tools, even after creating [custom tools](https://python.langchain.com/docs/modules/agents/tools/custom_tools).
MY SOLUTION:
1) Use ConversationBufferMemory to keep track of chat history.
2) Convert these messages to a format OpenAI wants for their API.
3) Use the OpenAI chat completion endpoint, that has support for function calling
Usage: `chatgpt_function_response(user_prompt)`
- Dynamo db and session id stuff comes from the [docs](https://python.langchain.com/docs/integrations/memory/dynamodb_chat_message_history)
- `memory.py` handles getting the chat history for a particular session (can be interpreted as a user). We use ConversationBufferMemory as we usually would and add a helper method to convert the ConversationBufferMemory to a [format that OpenAI wants](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_call_functions_with_chat_models.ipynb)
- `core.py` handles the main functionality with a user prompt. We add the user's prompt to the message history, and get the message history in the OpenAI format. We use the chat completion endpoint as normal, and add the function response call to the message history as an AI message.
- `functions.py` is also how we would normally use the chat completions API, also described [here](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_call_functions_with_chat_models.ipynb)
`memory.py`
```
import logging
from typing import List
import boto3
from langchain.memory import ConversationBufferMemory
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory
from langchain.schema.messages import SystemMessage
from langchain.adapters.openai import convert_message_to_dict
TABLE_NAME = "your table name"
# if using dynamodb
session = boto3.session.Session(
aws_access_key_id="",
aws_secret_access_key="",
region_name="",
)
def get_memory(session_id: str):
"""Get a conversation buffer with chathistory saved to dynamodb
Returns:
ConversationBufferMemory: A memory object with chat history saved to dynamodb
"""
# Define the necessary components with the dynamodb endpoint
message_history = DynamoDBChatMessageHistory(
table_name=TABLE_NAME,
session_id=session_id,
boto3_session=session,
)
# if you want to add a system prompt
if len(message_history.messages) == 0:
message_history.add_message(SystemMessage(content="whatever system prompt"))
memory = ConversationBufferMemory(
memory_key="chat_history", chat_memory=message_history, return_messages=True
)
logging.info(f"Memory: {memory}")
return memory
def convert_message_buffer_to_openai(memory: ConversationBufferMemory) -> List[dict]:
"""Convert a message buffer to a list of messages that OpenAI can understand
Args:
memory (ConversationBufferMemory): A memory object with chat history saved to dynamodb
Returns:
List[dict]: A list of messages that OpenAI can understand
"""
messages = []
for message in memory.buffer_as_messages:
messages.append(convert_message_to_dict(message))
return messages
```
`core.py`
```
import json
import logging

import openai

from functions import function_descriptions, function_names
from memory import convert_message_buffer_to_openai, get_memory

# MODEL and SESSION_ID are module-level constants defined elsewhere in the project.


def _handle_function_call(response: dict) -> str:
response_message = response["message"]
function_name = response_message["function_call"]["name"]
function_to_call = function_names[function_name]
function_args = json.loads(response_message["function_call"]["arguments"])
function_response = function_to_call(**function_args)
return function_response
def chatgpt_response(prompt, model=MODEL, session_id: str = SESSION_ID) -> str:
memory = get_memory(session_id)
memory.chat_memory.add_user_message(prompt)
messages = convert_message_buffer_to_openai(memory)
logging.info(f"Memory: {messages}")
response = openai.ChatCompletion.create(
model=model,
messages=messages,
)
answer = response["choices"][0]["message"]["content"]
memory.chat_memory.add_ai_message(answer)
return answer
def chatgpt_function_response(
prompt: str,
functions=function_descriptions,
model=MODEL,
session_id: str = SESSION_ID,
) -> str:
memory = get_memory(session_id)
memory.chat_memory.add_user_message(prompt)
messages = convert_message_buffer_to_openai(memory)
logging.info(f"Memory for function response: {messages}")
response = openai.ChatCompletion.create(
model=model,
messages=messages,
functions=functions,
)["choices"][0]
if response["finish_reason"] == "function_call":
answer = _handle_function_call(response)
else:
answer = response["message"]["content"]
memory.chat_memory.add_ai_message(answer)
return answer
```
`functions.py`
```
def create_reminder(
task: str, days: int, hours: int, minutes: int
) -> str:
return 'whatever'
function_names = {
"create_reminder": create_reminder,
}
function_descriptions = [
{
"name": "create_reminder",
"description": "This function handles the logic for creating a reminder for a "
"generic task at a given date and time.",
"parameters": {
"type": "object",
"properties": {
"task": {
"type": "string",
"description": "The task to be reminded of, such as 'clean the "
"house'",
},
"days": {
"type": "integer",
"description": "The number of days from now to be reminded",
},
"hours": {
"type": "integer",
"description": "The number of hours from now to be reminded",
},
"minutes": {
"type": "integer",
"description": "The number of minutes from now to be reminded",
},
},
"required": ["task", "days", "hours", "minutes"],
},
},
]
``` | How to add structured tools / functions with multiple inputs | https://api.github.com/repos/langchain-ai/langchain/issues/10473/comments | 11 | 2023-09-12T04:31:04Z | 2024-03-18T16:05:29Z | https://github.com/langchain-ai/langchain/issues/10473 | 1,891,650,757 | 10,473 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version: 0.0.279
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The key issue causing the error is the import statement of **BaseModel**. In the official example, the package is imported as **from pydantic import BaseModel, Field**, but in the langchain source code at _langchain\chains\openai_functions\qa_with_structure.py_, it's imported as **from langchain.pydantic_v1 import BaseModel, Field**. The inconsistency between these two package names results in an error when executing create_qa_with_structure_chain().
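A sketch of the user-side workaround implied by the above: derive the schema from LangChain's vendored pydantic module so both sides agree on the `BaseModel` class:

```python
from langchain.pydantic_v1 import BaseModel, Field  # instead of `from pydantic import ...`

class CustomResponseSchema(BaseModel):
    answer: str = Field(..., description="Answer to the question that was asked")
```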
Below is an error example.
``` python
import os
from typing import List
from langchain import PromptTemplate
from langchain.chains.openai_functions import create_qa_with_structure_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.schema import SystemMessage, HumanMessage
from pydantic import BaseModel, Field
os.environ["OPENAI_API_KEY"] = "xxxx"
llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
class CustomResponseSchema(BaseModel):
"""An answer to the question being asked, with sources."""
answer: str = Field(..., description="Answer to the question that was asked")
countries_referenced: List[str] = Field(
..., description="All of the countries mentioned in the sources"
)
sources: List[str] = Field(
..., description="List of sources used to answer the question"
)
doc_prompt = PromptTemplate(
template="Content: {page_content}\nSource: {source}",
input_variables=["page_content", "source"],
)
prompt_messages = [
SystemMessage(
content=(
"You are a world class algorithm to answer "
"questions in a specific format."
)
),
HumanMessage(content="Answer question using the following context"),
HumanMessagePromptTemplate.from_template("{context}"),
HumanMessagePromptTemplate.from_template("Question: {question}"),
HumanMessage(
content="Tips: Make sure to answer in the correct format. Return all of the countries mentioned in the "
"sources in uppercase characters. "
),
]
chain_prompt = ChatPromptTemplate(messages=prompt_messages)
qa_chain_pydantic = create_qa_with_structure_chain(
llm, CustomResponseSchema, output_parser="pydantic", prompt=chain_prompt
)
query = "What did he say about russia"
qa_chain_pydantic.run({"question": query, "context": query})
```
### Expected behavior
It is hoped that the package names can be standardized | The exception 'Must provide a pydantic class for schema when output_parser is 'pydantic'.' is caused by the inconsistent package name of BaseModel | https://api.github.com/repos/langchain-ai/langchain/issues/10472/comments | 2 | 2023-09-12T03:35:02Z | 2023-12-19T00:48:02Z | https://github.com/langchain-ai/langchain/issues/10472 | 1,891,606,072 | 10,472 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently, Unstructured loaders allow users to process elements when loading the document. This is done by applying user-specified `post_processors` to each element. These post processing functions are str -> str callables.
When using Unstructured loaders, allow element processing using `(Element) -> Element` or `(Element) -> str` callables.
### Motivation
A user using `UnstructuredPDFLoader` wants to take advantage of the inferred table structure when processing elements. They can't use the `post_processors` argument to access `element.metadata.text_as_html` because the input to each `post_processors` callable is a string:
>I'm finding that the mode='elements' option already does str(element) to every element, so I can't really use element.metadata.text_as_html
They evaluated this workaround:
```
from typing import Callable

from unstructured.documents import elements as elmt

from langchain.docstore.document import Document
from langchain.document_loaders import UnstructuredPDFLoader


class CustomPDFLoader(UnstructuredPDFLoader):
def __init__(
self,
*args,
pre_processors: list[Callable[[elmt.Element], str]] | None,
**kwargs,
) -> None:
super().__init__(*args, **kwargs)
self.pre_processors = pre_processors
def _pre_process_elements(self, elements: list[elmt.Element]) -> elmt.Element:
for element in elements:
for cleaner in self.pre_processors:
element.text = cleaner(element)
def load(self) -> str:
if self.mode != "single":
raise ValueError(f"mode of {self.mode} not supported.")
elements = self._get_elements()
self._pre_process_elements(elements)
metadata = self._get_metadata()
text = "\n\n".join([str(el) for el in elements])
docs = [Document(page_content=text, metadata=metadata)]
return docs
```
The intent is for the `_pre_process_elements` method above to replace the call to `_post_process_elements` in the second line of the [original load function](https://github.com/langchain-ai/langchain/blob/737b75d278a0eef8b3b9002feadba69ffe50e1b1/libs/langchain/langchain/document_loaders/unstructured.py#L87). Using this workaround would require copying the rest of the `load` method's code in the subclass, too.
### Your contribution
The team at Unstructured can investigate this request and submit a PR if needed. | Make entire element accessible for processing when loading with Unstructured loaders | https://api.github.com/repos/langchain-ai/langchain/issues/10471/comments | 1 | 2023-09-12T02:02:20Z | 2023-12-19T00:48:07Z | https://github.com/langchain-ai/langchain/issues/10471 | 1,891,540,586 | 10,471 |
[
"hwchase17",
"langchain"
]
| ### System Info
It looks like BedrockChat was removed from the chat_models/__init__.py when ChatKonko was added in this commit: https://github.com/langchain-ai/langchain/pull/10267/commits/280c1e465c4b89c6313fcc2c0679e3756b8566f9#diff-04148cb9262d722a69b81a119e1f8120515532263a1807239f60f00d9ff2a755
I'm guessing this was accidental, because the BedrockChat class definitions still exist.
@agola11 @hwchase17
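Since the class definition itself is still in place, a possible interim workaround is importing from the module path directly. A sketch (model id and kwargs are placeholders):

```python
from langchain.chat_models.bedrock import BedrockChat

chat = BedrockChat(model_id="anthropic.claude-v2", model_kwargs={"temperature": 0.0})
```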
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
.
### Expected behavior
I expect `from langchain.chat_models import BedrockChat` to work | BedrockChat model mistakenly removed in latest version? | https://api.github.com/repos/langchain-ai/langchain/issues/10468/comments | 4 | 2023-09-12T00:32:49Z | 2023-10-03T19:51:12Z | https://github.com/langchain-ai/langchain/issues/10468 | 1,891,477,538 | 10,468 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi. I have a vectorstore which holds embeddings of document chunks. I used FAISS to create my vector_db. As metadata I have 'document_id', 'chunk_id', and 'source'.
But now I want to run a summarizer to extract a summary for each document_id and put it as a new metadata for each chunk.
How can I do it?
The only way I've found so far is to process everything all over again, extracting the summary as a new step in the pipeline... but that's not ideal.
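For what it's worth, a sketch of patching metadata in place rather than re-embedding. It leans on the in-memory docstore internals (`_dict`) and assumes a `summaries` mapping keyed by `document_id` that you compute separately, so treat it as a rough idea:

```python
# vector_db is the existing FAISS vectorstore; summaries maps document_id -> summary text.
for doc_id, doc in vector_db.docstore._dict.items():
    document_id = doc.metadata.get("document_id")
    if document_id in summaries:
        doc.metadata["summary"] = summaries[document_id]

vector_db.save_local("faiss_index_with_summaries")
```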
### Suggestion:
_No response_ | Issue: Add new metadata to document_ids already saved in vectorstore (FAISS) | https://api.github.com/repos/langchain-ai/langchain/issues/10463/comments | 3 | 2023-09-11T20:55:58Z | 2023-12-19T00:48:13Z | https://github.com/langchain-ai/langchain/issues/10463 | 1,891,252,685 | 10,463 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.286
Python version: 3.11.2
Platform: x86_64 Debian 12.2.0-14
Weaviate 1.21.2 as vectorstore
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When following LangChain's documentation for[ Weaviate Self-Query Retriever](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/weaviate_self_query),
I get the following Warning:
```
/langchain/chains/llm.py:278: UserWarning: The predict_and_parse method is deprecated, instead pass an output parser directly to LLMChain.
warnings.warn(
```
The following code led to the warning, although retrieving documents as expected:
```
import os, openai, weaviate, logging
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.retrievers.self_query.weaviate import WeaviateTranslator
openai.api_key = os.environ['OPENAI_API_KEY']
embeddings = OpenAIEmbeddings()
client = weaviate.Client(
url = WEAVIATE_URL,
additional_headers = {
"X-OpenAI-Api-Key": openai.api_key
}
)
weaviate = Weaviate(
client = client,
index_name = INDEX_NAME,
text_key = "text",
by_text = False,
embedding = embeddings,
)
metadata_field_info = [ # Shortened for brevity
AttributeInfo(
name="text",
description="This is the main content of text.",
type="string",
),
AttributeInfo(
name="source",
description="The URL where the document was taken from.",
type="string",
),
AttributeInfo(
name="keywords",
description="A list of keywords from the piece of text.",
type="string",
),
]
logging.basicConfig()
logging.getLogger('langchain.retrievers.self_query').setLevel(logging.INFO)
document_content_description = "Collection of Laws and Code documents, including the Labor Code and related Laws."
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(
llm,
weaviate,
document_content_description,
metadata_field_info,
enable_limit = True,
verbose=True,
)
returned_docs_selfq = retriever.get_relevant_documents(question)
```
### Expected behavior
No Warnings and/or updated documentation instructing how to pass the output parser to LLMChain | User Warning when using Self Query Retriever with Weaviate | https://api.github.com/repos/langchain-ai/langchain/issues/10462/comments | 2 | 2023-09-11T20:04:55Z | 2023-12-18T23:45:57Z | https://github.com/langchain-ai/langchain/issues/10462 | 1,891,181,081 | 10,462 |
[
"hwchase17",
"langchain"
]
| I am trying to trace my LangChain runs by using LangChain Tracing Native Support on my local host, I created a session named agent_workflow and tried to receive the runs on it but it didn't work.
The problem is that whenever I run the RetrievalQA chain it gives me the following error:
`Error in LangChainTracerV1.on_chain_end callback: Unknown run type: retriever`
This is the code snippet specifying the problem:
```
os.environ["LANGCHAIN_TRACING"] = "true"
os.environ["LANGCHAIN_SESSION"] = "agent_workflow"
embed = OpenAIEmbeddings(
model=self.embedding_model_name
)
vectorStore = Chroma.from_documents(texts,embed)
def retrieval(self,question):
qa = RetrievalQA.from_chain_type(
llm,
chain_type="stuff",
retriever= vectorStore.as_retriever(k=1),
verbose=True,
chain_type_kwargs={
"verbose":True,
"prompt":prompt,
"memory": memory,
}
)
with get_openai_callback() as cb:
response = qa.run({"query":question})
return qa.run({"query":question})
```
How can I solve this? I saw a tutorial where it worked with `initialize_agent` instead of RetrievalQA, but I don't know whether that is the deciding factor.
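One thing that may be worth trying: the V1 tracer enabled by `LANGCHAIN_TRACING` predates retriever runs, whereas the V2 tracer knows about them. A sketch of switching, assuming you have a V2-compatible endpoint and API key (values are placeholders):

```python
import os

os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.smith.langchain.com"
os.environ["LANGCHAIN_API_KEY"] = "<your-api-key>"
os.environ["LANGCHAIN_PROJECT"] = "agent_workflow"
```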
| Issue: Error in LangChainTracerV1.on_chain_end callback: Unknown run type: | https://api.github.com/repos/langchain-ai/langchain/issues/10460/comments | 5 | 2023-09-11T19:17:54Z | 2023-12-20T16:06:11Z | https://github.com/langchain-ai/langchain/issues/10460 | 1,891,117,462 | 10,460 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
The following raises a `ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: expected maximum item count: 1, found: 2, please reformat your input and try again.`:
```
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.llms.bedrock import Bedrock
llm = Bedrock(
client=bedrock_client,
model_id="ai21.j2-ultra",
model_kwargs={"temperature": 0.9, "maxTokens": 500, "topP": 1, "stopSequences": ["\\n\\nHuman:", "\n\nAI:"]
})
prompt_template = PromptTemplate(template="{history}Human:I want to know how to write a story.\nAssistant: What genre do you want to write the story in?\n\nHuman: {input}", input_variables=['history', 'input'])
conversation = ConversationChain(
llm=llm, verbose=True, memory=ConversationBufferMemory(),prompt=prompt_template
)
conversation.predict(input="I want to write a horror story.")
```
This code works when only one stop sequence is passed.
The issue seems to be coming from within the Bedrock `invoke_model` call as I tried the same thing in Bedrock playground and received the same error.
### Suggestion:
Bedrock team needs to be contacted for this one. | Issue: Cannot pass more than one stop sequence to AI21 Bedrock model | https://api.github.com/repos/langchain-ai/langchain/issues/10456/comments | 2 | 2023-09-11T17:07:23Z | 2023-12-18T23:46:09Z | https://github.com/langchain-ai/langchain/issues/10456 | 1,890,923,908 | 10,456 |
[
"hwchase17",
"langchain"
]
| ### Feature request
While other model parameters for Anthropic are provided as class variables, `stop_sequences` is not a field on the `_AnthropicCommon` class, so you can only send `stop` in the `generate` call. And `generate` manually adds the stop sequences to the parameters before the call to Anthropic.
I suggest having `stop` as a class level parameters so it can be supplied during the creation of the `ChatAnthropic` class for example, like:
```
ChatAnthropic(
anthropic_api_key=api_token,
model=model,
temperature=temperature,
top_k=top_k,
top_p=top_p,
default_request_timeout=default_request_timeout,
max_tokens_to_sample=max_tokens_to_sample,
verbose=verbose,
stop=stop_sequences,
)
```
The changes required for this will be adding the class variable to the `_AnthropicCommon` class and changing the `_default_params` property like so:
```
@property
def _default_params(self) -> Mapping[str, Any]:
"""Get the default parameters for calling Anthropic API."""
d = {
"max_tokens_to_sample": self.max_tokens_to_sample,
"model": self.model,
}
if self.temperature is not None:
d["temperature"] = self.temperature
...
if self.stop_sequences is not None:
d["stop_sequences"] = self.stop_sequences
```
This would enable adding stop sequences directly to the model call when the chat-model object is created, while still keeping the current ability to pass them in the `generate` call for `ConversationChain` if the user so desires (also, in what cases would a user pass `stop` in the generate call if it's already available as a class variable?). This is especially useful because `ConversationalRetrievalChain` doesn't expose `stop` in its own call, so this addition would also keep the behaviour consistent across the different chains for a model.
So with `ConversationalRetrievalChain`, now the LLM would have the stop sequences already present which you can't currently pass like for `ConversationChain`:
```
ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=knowledge_base.retriever,
chain_type=chain_type,
verbose=verbose,
memory=conversation_memory,
return_source_documents=True
)
```
I would be happy to create a PR for this, just wanted to see some feedback/support, and see if someone has any counter points to this suggestion.
### Motivation
Using stop sequences for `ChatAnthropic` with `ConversationChain` and `ConversationRetrievalChain` causes issues.
### Your contribution
Yes, I'd be happy to create a PR for this. | stop sequences as a parameter for ChatAnthropic cannot be added | https://api.github.com/repos/langchain-ai/langchain/issues/10455/comments | 2 | 2023-09-11T16:42:36Z | 2023-12-19T00:48:23Z | https://github.com/langchain-ai/langchain/issues/10455 | 1,890,888,136 | 10,455 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain passes a max_elements parameter when building the HNSW index, but as of pg_embedding version 0.3.2 that parameter no longer exists.
The error is:
`Failed to create HNSW extension or index: (psycopg2.errors.InvalidParameterValue) unrecognized parameter "maxelements"`
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create Neon DB as an example in their cloud
### Expected behavior
PGEmbedding.from_embeddings.create_hnsw_index should run migration without errors | hnsw in Postgres via Neon extention return error | https://api.github.com/repos/langchain-ai/langchain/issues/10454/comments | 2 | 2023-09-11T16:27:02Z | 2023-12-18T23:46:18Z | https://github.com/langchain-ai/langchain/issues/10454 | 1,890,863,082 | 10,454 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hi, I am trying to use RedisChatMessageHistory but there is an error:
Error 97 connecting to localhost:6379. Address family not supported by protocol
However, another URL is defined:
```
REDIS_URL = f"redis://default:mypassword@redis-17697.c304.europe-west1-2.gce.cloud.redislabs.com:17697/0"
history = RedisChatMessageHistory(session_id='2', url=REDIS_URL, key_prefix='LILOK')
```
The Redis server is external, the VPC is disabled for the Lambda.
**Full error:**
```
[ERROR] ConnectionError: Error 97 connecting to localhost:6379. Address family not supported by protocol.
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 45, in lambda_handler
history.add_user_message(text)
File "/opt/python/langchain/schema/chat_history.py", line 46, in add_user_message
self.add_message(HumanMessage(content=message))
File "/opt/python/langchain/memory/chat_message_histories/redis.py", line 56, in add_message
self.redis_client.lpush(self.key, json.dumps(_message_to_dict(message)))
File "/opt/python/redis/commands/core.py", line 2734, in lpush
return self.execute_command("LPUSH", name, *values)
File "/opt/python/redis/client.py", line 505, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
File "/opt/python/redis/connection.py", line 1073, in get_connection
connection.connect()
File "/opt/python/redis/connection.py", line 265, in connect
raise ConnectionError(self._error_message(e))
```
**Full code:**
```
import os
import json
import requests
from langchain.memory import RedisChatMessageHistory
from langchain import OpenAI
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.chains import ConversationChain
TELEGRAM_TOKEN = 'mytoken'
TELEGRAM_URL = f"https://api.telegram.org/bot{TELEGRAM_TOKEN}/"
def lambda_handler(event, context):
REDIS_URL = f"redis://default:mypassword@redis-17697.c304.europe-west1-2.gce.cloud.redislabs.com:17697/0"
history = RedisChatMessageHistory(session_id='2', url=REDIS_URL, key_prefix='LILOK')
llm = OpenAI(model_name='text-davinci-003',
temperature=0,
max_tokens = 256)
memory = ConversationBufferMemory()
conversation = ConversationChain(
llm=llm,
verbose=True,
memory=memory
)
history = RedisChatMessageHistory("foo")
# Log the received event for debugging
print("Received event: ", json.dumps(event, indent=4))
message = json.loads(event['body'])
# Check if 'message' key exists in the event
if 'message' in message:
chat_id = message['message']['chat']['id']
text = message['message'].get('text', '')
if text == '/start':
send_telegram_message(chat_id, "Hi!")
else:
history.add_user_message(text)
result = conversation.predict(input=history.messages)
history.add_ai_message(result)
send_telegram_message(chat_id, result)
else:
print("No 'message' key found in the received event")
return {
'statusCode': 400,
'body': json.dumps("Bad Request: No 'message' key")
}
return {
'statusCode': 200
}
def send_telegram_message(chat_id, message):
url = TELEGRAM_URL + f"sendMessage?chat_id={chat_id}&text={message}"
requests.get(url)
```
Please advise
### Suggestion:
_No response_ | Error 97 connecting to localhost:6379. Address family not supported by protocol | https://api.github.com/repos/langchain-ai/langchain/issues/10453/comments | 4 | 2023-09-11T16:06:18Z | 2023-09-11T23:23:39Z | https://github.com/langchain-ai/langchain/issues/10453 | 1,890,830,662 | 10,453 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain: 0.0.285
Platform: OSX Ventura (apple silicon)
Python version: 3.11
### Who can help?
@gregnr since it looks like you added the [Supabase example code](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/supabase_self_query)
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create fresh conda env with python 3.11
2. Install JupyterLap and create notebook
3. Follow the steps in the [Supabase example code](https://python.langchain.com/docs/modules/data_connection/retrievers/self_query/supabase_self_query) tutorial
On the step to:
```
vectorstore = SupabaseVectorStore.from_documents(
docs,
embeddings,
client=supabase,
table_name="documents",
query_name="match_documents"
)
```
it fails with error `JSONDecodeError: Expecting value: line 1 column 1 (char 0)`:
<details>
<summary>Stacktrace</summary>
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
Cell In[10], line 1
----> 1 vectorstore = SupabaseVectorStore.from_documents(
2 docs,
3 embeddings,
4 client=supabase,
5 table_name="documents",
6 query_name="match_documents"
7 )
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/site-packages/langchain/vectorstores/base.py:417, in VectorStore.from_documents(cls, documents, embedding, **kwargs)
415 texts = [d.page_content for d in documents]
416 metadatas = [d.metadata for d in documents]
--> 417 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/site-packages/langchain/vectorstores/supabase.py:147, in SupabaseVectorStore.from_texts(cls, texts, embedding, metadatas, client, table_name, query_name, ids, **kwargs)
145 ids = [str(uuid.uuid4()) for _ in texts]
146 docs = cls._texts_to_documents(texts, metadatas)
--> 147 cls._add_vectors(client, table_name, embeddings, docs, ids)
149 return cls(
150 client=client,
151 embedding=embedding,
152 table_name=table_name,
153 query_name=query_name,
154 )
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/site-packages/langchain/vectorstores/supabase.py:323, in SupabaseVectorStore._add_vectors(client, table_name, vectors, documents, ids)
320 for i in range(0, len(rows), chunk_size):
321 chunk = rows[i : i + chunk_size]
--> 323 result = client.from_(table_name).upsert(chunk).execute() # type: ignore
325 if len(result.data) == 0:
326 raise Exception("Error inserting: No rows added")
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/site-packages/postgrest/_sync/request_builder.py:62, in SyncQueryRequestBuilder.execute(self)
53 r = self.session.request(
54 self.http_method,
55 self.path,
(...)
58 headers=self.headers,
59 )
61 try:
---> 62 return APIResponse.from_http_request_response(r)
63 except ValidationError as e:
64 raise APIError(r.json()) from e
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/site-packages/postgrest/base_request_builder.py:154, in APIResponse.from_http_request_response(cls, request_response)
150 @classmethod
151 def from_http_request_response(
152 cls: Type[APIResponse], request_response: RequestResponse
153 ) -> APIResponse:
--> 154 data = request_response.json()
155 count = cls._get_count_from_http_request_response(request_response)
156 return cls(data=data, count=count)
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/site-packages/httpx/_models.py:756, in Response.json(self, **kwargs)
754 if encoding is not None:
755 return jsonlib.loads(self.content.decode(encoding), **kwargs)
--> 756 return jsonlib.loads(self.text, **kwargs)
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/json/__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
341 s = s.decode(detect_encoding(s), 'surrogatepass')
343 if (cls is None and object_hook is None and
344 parse_int is None and parse_float is None and
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
348 cls = JSONDecoder
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/json/decoder.py:337, in JSONDecoder.decode(self, s, _w)
332 def decode(self, s, _w=WHITESPACE.match):
333 """Return the Python representation of ``s`` (a ``str`` instance
334 containing a JSON document).
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
File /opt/miniconda3/envs/self-query-experiment/lib/python3.11/json/decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
</details>
It appears that Supabase is returning a 201 response code with an empty body. The postgrest library then tries to parse the JSON with `data = request_response.json()`, which fails because the body is empty.
Are there extra headers that should be added to the supabase client to tell it to return a response body?
### Expected behavior
No error when invoking `SupabaseVectorStore.from_documents()` | Error creating Supabase vector store when running self-query example code | https://api.github.com/repos/langchain-ai/langchain/issues/10447/comments | 6 | 2023-09-11T14:21:18Z | 2023-09-12T07:04:17Z | https://github.com/langchain-ai/langchain/issues/10447 | 1,890,633,505 | 10,447 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Similarly to `memory=ConversationSummaryBufferMemory(llm=llm, max_token_limit=n)` passed in `initialize_agent`, there should be a possibility to pass `ConversationSummaryBufferMemory` like-object which would summarize the `intermediate_steps` in the agent if the `agent_scratchpad` created from the `intermediate_steps` exceeds `n` tokens
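Something like this is what I have in mind (the commented-out parameter is hypothetical and does not exist today):
```
from langchain.agents import initialize_agent, AgentType
from langchain.memory import ConversationSummaryBufferMemory

# Summarization of the chat history already exists (llm/tools defined as usual):
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=1000)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    # Hypothetical analogue for the intermediate steps:
    # scratchpad_summarizer=ConversationSummaryBufferMemory(llm=llm, max_token_limit=2000),
)
```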
### Motivation
Agents can run out of the context window when solving a complex problem with tools.
### Your contribution
I can't commit to anything for now. | Summarize agent_scratchpad when it exceeds n tokens | https://api.github.com/repos/langchain-ai/langchain/issues/10446/comments | 12 | 2023-09-11T14:16:50Z | 2024-04-01T20:03:40Z | https://github.com/langchain-ai/langchain/issues/10446 | 1,890,624,612 | 10,446 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
It does not list `tiktoken` as a dependency, and while trying to run the code that creates the vector store via `SupabaseVectorStore.from_documents()`, I got this error:
```
ImportError: Could not import tiktoken python package. This is needed in order to for OpenAIEmbeddings. Please install it with `pip install tiktoken`.
```
### Idea or request for content:
List `tiktoken` as a dependency in the docs (i.e., tell readers to `pip install tiktoken`).
cc @gregnr | DOC: Supabase Vector self-querying | https://api.github.com/repos/langchain-ai/langchain/issues/10444/comments | 2 | 2023-09-11T13:20:54Z | 2023-09-12T07:01:13Z | https://github.com/langchain-ai/langchain/issues/10444 | 1,890,500,153 | 10,444 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am getting the following error after a period of inactivity. However, the issue resolves itself when I restart the server and run the same query.
Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=600).
How can I fix this issue?
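Would lowering the client-side timeout and retry settings be the right approach? For example (assuming these are the relevant parameters):
```
from langchain.llms import OpenAI

llm = OpenAI(
    model_name="text-davinci-003",
    request_timeout=60,  # fail much sooner than the default 600s read timeout
    max_retries=3,
)
```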
### Suggestion:
_No response_ | Issue: Request timeout | https://api.github.com/repos/langchain-ai/langchain/issues/10443/comments | 3 | 2023-09-11T12:12:49Z | 2024-02-11T16:14:56Z | https://github.com/langchain-ai/langchain/issues/10443 | 1,890,374,565 | 10,443 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.285
Python version: 3.11.2
Platform: x86_64 Debian 12.2.0-14
Weaviate 1.21.2 as vectorstore
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Following the instructions [here](https://python.langchain.com/docs/modules/data_connection/indexing#quickstart),
`from langchain.indexes import SQLRecordManager, index` returns the following warning:
```
/lib/python3.11/site-packages/langchain/indexes/_sql_record_manager.py:38: MovedIn20Warning: The ``declarative_base()`` function is now available as sqlalchemy.orm.declarative_base(). (deprecated since: 2.0) (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
Base = declarative_base()
```
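For completeness, the instantiation itself is just the quickstart pattern, roughly:
```
from langchain.indexes import SQLRecordManager, index

namespace = "my_docs"
record_manager = SQLRecordManager(namespace, db_url="sqlite:///record_manager_cache.sql")
record_manager.create_schema()
```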
LangChain's [indexes documentation](https://api.python.langchain.com/en/latest/api_reference.html#module-langchain.indexes) doesn't include `SQLRecordManager`. Additionally, the `RecordManager` [documentation](https://api.python.langchain.com/en/latest/indexes/langchain.indexes.base.RecordManager.html#langchain-indexes-base-recordmanager) doesn't mention that it can be used with SQLite.
### Expected behavior
No warnings. | Warning using SQLRecordManager | https://api.github.com/repos/langchain-ai/langchain/issues/10439/comments | 2 | 2023-09-11T08:32:27Z | 2024-02-02T04:10:52Z | https://github.com/langchain-ai/langchain/issues/10439 | 1,889,969,284 | 10,439 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
The current [Weaviate documentation](https://python.langchain.com/docs/integrations/providers/weaviate) in LangChain doesn't include instructions for setting up Weaviate's Schema to integrate it properly with LangChain. This will prevent any future issues like this one: #10424
### Idea or request for content:
Include in the documentation a reference to [Weaviate Auto-Schema](https://weaviate.io/developers/weaviate/config-refs/schema#auto-schema), explaining this is the default behavior when a `Document` is loaded to a Weaviate vectorstore. Also, give examples of how the Schema JSON file can be adjusted to work without problems with LangChain. | DOC: Include instructions for Weaviate Schema Configuration | https://api.github.com/repos/langchain-ai/langchain/issues/10438/comments | 2 | 2023-09-11T08:03:25Z | 2023-12-25T16:08:34Z | https://github.com/langchain-ai/langchain/issues/10438 | 1,889,911,805 | 10,438 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
Hello
I am using LangChain's BabyAGI, and I need to create a custom tool.
Inside this custom tool's function logic I need to perform some operations based on a file. How can I do that?
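Roughly, this is the kind of tool I am trying to build (simplified sketch; the file logic is a placeholder for my real operations):
```
from langchain.agents import Tool

def process_file(path: str) -> str:
    # Placeholder: read the file and perform whatever operations are needed.
    with open(path, "r", encoding="utf-8") as f:
        content = f.read()
    return content[:1000]

file_tool = Tool(
    name="file_processor",
    func=process_file,
    description="Performs operations on a local file given its path.",
)
```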
### Suggestion:
_No response_ | Issue: babyagi agent custom tool file operation usage | https://api.github.com/repos/langchain-ai/langchain/issues/10437/comments | 3 | 2023-09-11T06:42:00Z | 2023-12-25T16:08:40Z | https://github.com/langchain-ai/langchain/issues/10437 | 1,889,781,015 | 10,437 |
[
"hwchase17",
"langchain"
]
| ### System Info
- langchain v0.0.285
- transformers v4.32.1
- Windows10 Pro (virtual machine, running on a Server with several virtual machines!)
- 32 - 100GB Ram
- AMD Epyc
- 2x Nvidia RTX4090
- Python 3.10
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hey guys,
I think there is a problem with "HuggingFaceInstructEmbeddings".
When using:
```
embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl", cache_folder="testing")
vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings)
```
or
```
embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")
vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings)
```
or
```
embeddings = HuggingFaceInstructEmbeddings(model_name="intfloat/multilingual-e5-large", model_kwargs={"device": "cuda:0"})
db = Chroma.from_documents(documents=texts, embedding=embeddings, collection_name="snakes", persist_directory="db")
```
In my opinion, the problem always seems to occur in the 2nd line of each example, when `embedding=embeddings` is used. Shortly after printing "512 Tokens used" (or similar text), the complete server breaks down and is switched off.
Sometimes the system does run the task but prints errors like "can't find the HUGGINGFACEHUB_API_TOKEN". But if I run the code again (without having changed anything), **_the server_** (not only my virtual machine) switches off :(
We can't find any error message in the Windows system logs, and no error message on the server.
### Expected behavior
Running the code. Maybe the problem is caused by using it on virtual machines?
I don't know, but always switching off the whole server is a big Problem for our company - i hope you can help me :) | Use "HuggingFaceInstructEmbeddings" --> powering down the whole Server with all running VMs :( | https://api.github.com/repos/langchain-ai/langchain/issues/10436/comments | 8 | 2023-09-11T04:58:51Z | 2023-09-13T17:35:43Z | https://github.com/langchain-ai/langchain/issues/10436 | 1,889,662,620 | 10,436 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Hi
Currently, min_seconds and max_seconds of create_base_retry_decorator are hard-coded values. Can you please make these parameters configurable so that we can pass them from AzureChatOpenAI, similar to max_retries?
e.g.:
```
llm = AzureChatOpenAI(
    deployment_name=deployment_name,
    model_name=model_name,
    max_tokens=max_tokens,
    temperature=0,
    max_retries=7,
    min_seconds=20,
    max_seconds=60,
)
```
### Motivation
Setting these values will help with RateLimitError. Currently, these parameters need to be edited in the library files, which is impractical to set up in all deployed environments.
### Your contribution
NA | keep min_seconds and max_seconds of create_base_retry_decorator configurable | https://api.github.com/repos/langchain-ai/langchain/issues/10435/comments | 3 | 2023-09-11T04:56:47Z | 2024-02-20T16:08:26Z | https://github.com/langchain-ai/langchain/issues/10435 | 1,889,660,615 | 10,435 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am using OpenAIFunctionsAgent with langchain 0.0.285, and the "Could not parse tool input" error occurs frequently when an input is provided:
Could not parse tool input: {'name': 'AI_tool', 'arguments': 'What is a pre-trained chatbot?'} because the arguments is not valid JSON.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
# Imports added for completeness (paths assumed from the 0.0.285 docs);
# `db` is a Milvus vectorstore and `question`/`chat_history` come from my app.
from langchain.agents import AgentExecutor, OpenAIFunctionsAgent
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import MessagesPlaceholder
from langchain.schema import SystemMessage

retriever = db.as_retriever()  # Milvus
tool = create_retriever_tool(
retriever,
"document_search_tool",
"useful for answering questions related to XXXXXXXX."
)
tool_sales = create_retriever_tool(
retriever,
"sales_tool",
"useful for answering questions related to buying or subscribing XXXXXXXX."
)
tool_support = create_retriever_tool(
retriever,
"support_tool",
"useful for when you need to answer questions related to support humans on XXXXXXXX."
)
tools = [tool, tool_sales]
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0613", temperature=0.3)
system_message = SystemMessage(
content=(
"You are a digital team member of XXXXXXXX Organization, specialising in XXXXXXXX."
"Always respond and act as an office manager of XXXXXXXX, never referring to the XXXXXXXX "
"as an external or separate entity. "
"* Please answer questions directly from the context, and strive for brevity, keeping answers under 30 words."
"* Convey information in a manner that's both professional and empathetic, embodying the values of XXXXXXXX."
)
)
prompt = OpenAIFunctionsAgent.create_prompt(
system_message=system_message,
extra_prompt_messages=[MessagesPlaceholder(variable_name="chat_history")]
)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt, verbose=True)
memory = ConversationBufferWindowMemory(memory_key="chat_history", return_messages=True, k=6)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
memory=memory,
verbose=True,
)
result = agent({"input": question, "chat_history": chat_history})
answer = str(result["output"])
print(answer)
```
### Expected behavior
i need to remove the error | Could not parse tool input: {'name': 'AI_tool', 'arguments': 'What is a pre-trained chatbot?'} because the arguments is not valid JSON. | https://api.github.com/repos/langchain-ai/langchain/issues/10433/comments | 5 | 2023-09-11T03:44:09Z | 2024-01-03T09:32:01Z | https://github.com/langchain-ai/langchain/issues/10433 | 1,889,593,320 | 10,433 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Could you add an implementation of BaseChatModel using CTransformers?
### Motivation
I prefer to use a local model instead of an API. The LLM wrapper works, but I need a chat-model wrapper for it.
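For context, the plain LLM wrapper already works for me, e.g. (model name is just the one from the docs):
```
from langchain.llms import CTransformers

llm = CTransformers(model="marella/gpt-2-ggml")  # works fine as a plain LLM
print(llm("AI is going to"))
```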
### Your contribution
My failed attempt
```
from pydantic import BaseModel, Field
from typing import Any, List, Optional
from ctransformers import AutoModelForCausalLM, LLM
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.chat_models.base import SimpleChatModel
from langchain.schema import BaseMessage, HumanMessage
class CTransformersChatModel(SimpleChatModel, BaseModel):
ctransformers_model: LLM = Field(default_factory=AutoModelForCausalLM)
def __init__(self, model_path: str, model_type: Optional[str] = "llama", **kwargs: Any):
super().__init__(**kwargs)
self.ctransformers_model = AutoModelForCausalLM.from_pretrained(model_path)
def _call(
self,
messages: List[BaseMessage],
stop: Optional[List[str]] = None,
run_manager: Optional[CallbackManagerForLLMRun] = None,
**kwargs: Any,
) -> str:
# Convert messages to string prompt
prompt = " ".join([message.content for message in messages if isinstance(message, HumanMessage)])
return self.ctransformers_model(prompt, stop=stop, run_manager=run_manager, **kwargs)
@property
def _llm_type(self) -> str:
"""Return type of chat model."""
return "ctransformers_chat_model"
``` | BaseChatModel implementation using CTransformers | https://api.github.com/repos/langchain-ai/langchain/issues/10427/comments | 2 | 2023-09-10T21:14:33Z | 2023-12-18T23:46:32Z | https://github.com/langchain-ai/langchain/issues/10427 | 1,889,328,945 | 10,427 |
[
"hwchase17",
"langchain"
]
| ### System Info
As far as I have tried, this reproduces in many versions, including the latest `langchain==0.0.285`.
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Using the following code
```
llm = ChatOpenAI(model_name="gpt-4", temperature=0, verbose=True) # sometimes with streaming=True
# example of one tool thats being used
loader = PyPDFLoader(insurance_file)
pages = loader.load_and_split()
faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings())
health_insurance_retriever = faiss_index.as_retriever()
tool = create_retriever_tool(health_insurance_retriever, "health_insurance_plan",
"XXX Description")
agent_executor = create_conversational_retrieval_agent(
llm, [tool1, tool2], verbose=True, system_message="...")
agent_executor("Some question that requires usage of retrieval tools")
```
The result often (it's statistical, but it reproduces pretty frequently) comes back with placeholder references such as the following:
```
I'm sorry to hear that you're experiencing back pain. Let's look into your health insurance plan to see what coverage you have for this issue.
[Assistant to=functions.health_insurance_plan]
{
"__arg1": "back pain"
}
...
[Assistant to=functions.point_solutions]
{
"__arg1": "back pain"
}
```
### Expected behavior
Chain using the retrieval tools to actually query the vector store, instead of returning the placeholders
Thank you for your help! | Conversational Retrieval Agent returning partial output | https://api.github.com/repos/langchain-ai/langchain/issues/10425/comments | 4 | 2023-09-10T16:57:30Z | 2024-03-11T16:16:51Z | https://github.com/langchain-ai/langchain/issues/10425 | 1,889,238,903 | 10,425 |
[
"hwchase17",
"langchain"
]
| ### System Info
LangChain version: 0.0.276
Python version: 3.11.2
Platform: x86_64 Debian 12.2.0-14
Weaviate as vectorstore
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import os, openai, weaviate
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
openai.api_key = os.environ['OPENAI_API_KEY']
embeddings = OpenAIEmbeddings()
INDEX_NAME = 'LaborIA_VectorsDB'
client = weaviate.Client(
url = "http://10.0.1.21:8085",
additional_headers = {
"X-OpenAI-Api-Key": openai.api_key
}
)
weaviate = Weaviate(
client = client,
index_name = INDEX_NAME,
text_key = "text",
by_text = False,
embedding = embeddings,
)
hyb_weav_retriever = WeaviateHybridSearchRetriever(
client=client,
index_name=INDEX_NAME,
text_key="text",
attributes=[],
create_schema_if_missing=True,
)
returned_docs_hybrid = hyb_weav_retriever.get_relevant_documents(question, score=True)
```
This returns the following trace:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File <timed exec>:1
File [~/AI](https://vscode-remote+ssh-002dremote-002b10-002e0-002e1-002e21.vscode-resource.vscode-cdn.net/home/rodrigo/AI%20Project/~/AI) Project/jupyternbook/lib/python3.11/site-packages/langchain/schema/retriever.py:208, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
206 except Exception as e:
207 run_manager.on_retriever_error(e)
--> 208 raise e
209 else:
210 run_manager.on_retriever_end(
211 result,
212 **kwargs,
213 )
File [~/AI](https://vscode-remote+ssh-002dremote-002b10-002e0-002e1-002e21.vscode-resource.vscode-cdn.net/home/rodrigo/AI%20Project/~/AI) Project/jupyternbook/lib/python3.11/site-packages/langchain/schema/retriever.py:201, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
199 _kwargs = kwargs if self._expects_other_args else {}
200 if self._new_arg_supported:
--> 201 result = self._get_relevant_documents(
202 query, run_manager=run_manager, **_kwargs
203 )
204 else:
205 result = self._get_relevant_documents(query, **_kwargs)
File [~/AI](https://vscode-remote+ssh-002dremote-002b10-002e0-002e1-002e21.vscode-resource.vscode-cdn.net/home/rodrigo/AI%20Project/~/AI) Project/jupyternbook/lib/python3.11/site-packages/langchain/retrievers/weaviate_hybrid_search.py:113, in WeaviateHybridSearchRetriever._get_relevant_documents(self, query, run_manager, where_filter, score)
111 result = query_obj.with_hybrid(query, alpha=self.alpha).with_limit(self.k).do()
112 if "errors" in result:
--> 113 raise ValueError(f"Error during query: {result['errors']}")
115 docs = []
117 for res in result["data"]["Get"][self.index_name]:
ValueError: Error during query: [{'locations': [{'column': 6, 'line': 1}], 'message': 'get vector input from modules provider: VectorFromInput was called without vectorizer', 'path': ['Get', 'LaborIA_VectorsDB']}]
```
### Expected behavior
Returned relevant documents. | Weaviate Hybrid Search Returns Error | https://api.github.com/repos/langchain-ai/langchain/issues/10424/comments | 5 | 2023-09-10T16:55:14Z | 2024-01-26T00:43:23Z | https://github.com/langchain-ai/langchain/issues/10424 | 1,889,237,256 | 10,424 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Provide a parameter that controls whether images are extracted from the PDF, and add support for it.
### Motivation
A PDF may contain several images with abundant information, but from reading the code it seems there is no support for extracting images from the PDF.
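Something along these lines is what I have in mind (the parameter is hypothetical, not implemented today):
```
from langchain.document_loaders import PyPDFLoader

# Hypothetical: opt in to image extraction alongside the text
loader = PyPDFLoader("example.pdf", extract_images=True)
docs = loader.load()
```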
### Your contribution
I'd like to add the feature if it is really lacking. | Is there a support for extracting images from pdf? | https://api.github.com/repos/langchain-ai/langchain/issues/10423/comments | 3 | 2023-09-10T16:41:55Z | 2024-07-03T16:04:21Z | https://github.com/langchain-ai/langchain/issues/10423 | 1,889,225,613 | 10,423 |
[
"hwchase17",
"langchain"
]
| ### System Info
Issue: Agent + GmailToolkit sends message AND rootId to the recipient address
Current Behaviour:
1. Instruct the Agent to send a message to the recipient
2. Agent emails recipient with a message and then sends a new message of just the rootId (e.g., `r25406384....`)
Example:
<img width="1181" alt="Screenshot 2023-09-10 at 10 48 29 AM" src="https://github.com/langchain-ai/langchain/assets/94654154/68258bee-985e-4844-9ae8-00b81248d166">
Desired Behaviour:
1. Instruct the Agent to send a message to the recipient
2. Agent emails recipient with only the message and NOT the rootId
My initial suspicion is that this has to do with the prompting of the agent and the multistep process of writing, drafting, and sending all in one go. Currently looking into this and will add any updates/findings here. All help and suggestions welcome!
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the agent notebook from the docs: https://python.langchain.com/docs/integrations/toolkits/gmail
### Expected behavior
Desired Behaviour:
1. Instruct the Agent to send a message to the recipient
2. Agent emails recipient with only the message and NOT the rootId | Gmail Toolkit sends message and rootId of message | https://api.github.com/repos/langchain-ai/langchain/issues/10422/comments | 2 | 2023-09-10T15:54:38Z | 2023-12-18T23:46:37Z | https://github.com/langchain-ai/langchain/issues/10422 | 1,889,195,706 | 10,422 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
### Problem:
In the _**libs/langchain/langchain/memory/token_buffer.py**_ file:
```
@property
def buffer(self) -> Any:
"""String buffer of memory."""
return self.buffer_as_messages if self.return_messages else self.buffer_as_str
@property
def buffer_as_str(self) -> str:
"""Exposes the buffer as a string in case return_messages is True."""
return get_buffer_string(
self.chat_memory.messages,
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
@property
def buffer_as_messages(self) -> List[BaseMessage]:
"""Exposes the buffer as a list of messages in case return_messages is False."""
return self.chat_memory.messages
```
The **True** and **False** words should be inverted in the **buffer_as_str** and **buffer_as_messages** methods' documentation.
See the logic in the **buffer** method's return:
> return self.buffer_as_messages if self.return_messages else self.buffer_as_str
### Correction:
Swap both words:
**True** :arrow_backward: :arrow_forward: **False**
To get that result:
```
@property
def buffer(self) -> Any:
"""String buffer of memory."""
return self.buffer_as_messages if self.return_messages else self.buffer_as_str
@property
def buffer_as_str(self) -> str:
"""Exposes the buffer as a string in case return_messages is False."""
return get_buffer_string(
self.chat_memory.messages,
human_prefix=self.human_prefix,
ai_prefix=self.ai_prefix,
)
@property
def buffer_as_messages(self) -> List[BaseMessage]:
"""Exposes the buffer as a list of messages in case return_messages is True."""
return self.chat_memory.messages
```
### Idea or request for content:
_No response_ | DOC: Inversion of 'True' and 'False' in ConversationTokenBufferMemory Property Comments | https://api.github.com/repos/langchain-ai/langchain/issues/10420/comments | 1 | 2023-09-10T11:28:47Z | 2023-09-12T13:12:36Z | https://github.com/langchain-ai/langchain/issues/10420 | 1,889,109,167 | 10,420 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
# Combine the LLM with the tools with make a ReAct agent
Currently, the documentation says the agent can take only the user question. Is there an example of passing multiple parameters?
```
inputdata = {"input": COMPLEX_QUERY, "channel": "mychannel", "product": "myproduct"}
react_agent = initialize_agent(tools,
                               llm,
                               agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
                               verbose=True)
react_agent.run(inputdata)
```
`channel` and `product` are custom parameters that need to be passed to the Tool.
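One workaround I've considered (not sure whether it's the recommended one) is folding the extra parameters into the input string and letting the tool parse them back out:
```
query = f"{COMPLEX_QUERY}\nchannel: mychannel\nproduct: myproduct"
react_agent.run(query)
```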
### Idea or request for content:
_No response_ | DOC: Passing parameters to tools from Agent | https://api.github.com/repos/langchain-ai/langchain/issues/10419/comments | 2 | 2023-09-10T10:59:44Z | 2023-12-18T23:46:42Z | https://github.com/langchain-ai/langchain/issues/10419 | 1,889,098,692 | 10,419 |
[
"hwchase17",
"langchain"
]
| ### Feature request
In the [SmartLLMChain](https://python.langchain.com/docs/use_cases/more/self_check/smart_llm), I would like to randomize the temperature of the `ideation_llm`. It could have a positive impact on its creativity and, in turn, on the evaluation.
We could ask for specific (number of LLMs, temperature) pairs, or automate the temperature distribution with classical ones (a rough sketch of what I mean follows this list):
- Gaussian
- Poisson
- Uniform
- Exponential
- Geometric
- Log-Normal
- ...
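A rough sketch of the idea (hypothetical; as far as I can tell SmartLLMChain does not accept per-idea LLMs today):
```
import random
from langchain.chat_models import ChatOpenAI

n_ideas = 4
# Gaussian example, clamped to a sensible range
temps = [min(max(random.gauss(0.8, 0.2), 0.0), 1.5) for _ in range(n_ideas)]
ideation_llms = [ChatOpenAI(temperature=t) for t in temps]
# SmartLLMChain would then run one ideation pass per LLM/temperature.
```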
### Motivation
Potentially enhances overall chain performance.
### Your contribution
I could write a blog post about the benchmark. | :bulb: SmartLLMChain > Rrandomized temperatures for ideation_llm for better crowd diversity simulation | https://api.github.com/repos/langchain-ai/langchain/issues/10418/comments | 1 | 2023-09-10T07:55:16Z | 2023-12-18T23:46:47Z | https://github.com/langchain-ai/langchain/issues/10418 | 1,889,035,389 | 10,418 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Is there a way to save options or examples instead of history with `memory.save_context()` in `VectorStoreRetrieverMemory`?
For example, if A, B, and C are expected as the answer to a certain question, A, B, and C are returned as the predict results for that question. The current memory function takes into account the flow of history, so it is not possible to select options or define behavior like the switch statement in programming languages.
If I just don't know about this feature, I'd appreciate it if you could let me know. In a past issue, when someone wanted to have a rule-based conversation, the answerer only said to use `VectorStoreRetrieverMemory`, but no examples were given. If you have any simple examples, please let me know.
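To make it concrete, this is roughly what I am doing today (simplified):
```
# Store candidate answers as if they were past turns:
memory.save_context({"input": "What plan should I choose?"}, {"output": "Option A: basic plan"})
memory.save_context({"input": "What plan should I choose?"}, {"output": "Option B: premium plan"})
# ...but these are treated as conversation history, not as selectable options/examples.
```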
### Motivation
This is to stably control the chatbot's behavior.
### Your contribution
I checked and found out that this feature does not currently exist. | Save options or examples instead of history with memory.save_context() in VectorStoreRetrieverMemory | https://api.github.com/repos/langchain-ai/langchain/issues/10417/comments | 4 | 2023-09-10T07:29:26Z | 2023-09-21T09:49:16Z | https://github.com/langchain-ai/langchain/issues/10417 | 1,889,027,787 | 10,417 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Currently [llama-cpp-python](https://github.com/abetlen/llama-cpp-python#web-server) provides a server package that acts as a drop-in replacement for the OpenAI API.
Is there a specific LangChain LLM class that supports this server, or do we need to use the existing `OpenAI` class with a different `openai_api_base`?
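i.e., is the intended approach simply something like this (untested sketch)?
```
from langchain.llms import OpenAI

llm = OpenAI(
    openai_api_key="not-needed",                # the local server ignores it
    openai_api_base="http://llm-host:8000/v1",  # llama-cpp-python server
)
```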
### Motivation
I would like to have a dedicated machine or host that runs only the llama-cpp-python server, whereas the client that uses LangChain should interact with it just like we do with OpenAI.
### Your contribution
I would like to contribute but before that I need to check if there's any solution already available. | Support for llama-cpp-python server | https://api.github.com/repos/langchain-ai/langchain/issues/10415/comments | 9 | 2023-09-10T01:16:19Z | 2024-07-12T16:52:15Z | https://github.com/langchain-ai/langchain/issues/10415 | 1,888,924,696 | 10,415 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.285 (from langchain.vectorstores.redis import Redis)
Python 3.10.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Based on the [original documentation](https://python.langchain.com/docs/integrations/vectorstores/redis) the vectorstore is created using the Redis.from_documents() method
```
from langchain.vectorstores.redis import Redis
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
documents_raw = [Document(page_content="This is Alice's phone number: 123-456-7890", lookup_str='', metadata=metadata, lookup_index=0), Document(page_content='Team: Angels "Payroll (millions)": 154.49 "Wins": 89', lookup_str='', metadata=metadata, lookup_index=0)]
embeddings = OpenAIEmbeddings()
llm = ChatOpenAI()
schema = {'text': [{'name': 'source'},
{'name': 'title'}, ],
'numeric': [{'name': 'created_at'}], 'tag': []}
index_name = "index_name_123"
rds = Redis.from_documents(
documents=documents_raw, # a list of Document objects from loaders or created
embedding=embeddings, # an Embeddings object
redis_url="redis://localhost:6379",
index_name=index_name,
index_schema=schema,
keys=["a", "b"] # this is my addition. Passing my custom keys, breaks the code
)
```
### Expected behavior
**Objective**: be able to use custom keys in Redis
**Problem**:
The Redis.from_documents() method has --> **kwargs: Any
It calls the `from_texts()` method, which calls the `from_texts_return_keys()`. This calls ` add_texts()` which contains a line --> `keys_or_ids = kwargs.get("keys", kwargs.get("ids"))`
Therefore if I understand correctly, I assume that both "keys" or "ids" would be valid keyword arguments as well from the from_documents() method. This would achieve storing documents using custom keys. However it raises:
```
File "C:\Users\user\.virtualenvs\project-c_T0zlg5\Lib\site-packages\redis\connection.py", line 1066, in get_connection
connection = self._available_connections.pop()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IndexError: pop from empty list
During handling of the above exception, another exception occurred:
```
```
Traceback (most recent call last):
File "C:\Users\userabc\Music\project\pepe.py", line 98, in <module>
rds = Redis.from_documents(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\langchain\vectorstores\base.py", line 417, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\langchain\vectorstores\redis\base.py", line 488, in from_texts
instance, _ = cls.from_texts_return_keys(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\langchain\vectorstores\redis\base.py", line 405, in from_texts_return_keys
instance = cls(
^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\langchain\vectorstores\redis\base.py", line 274, in __init__
redis_client = get_client(redis_url=redis_url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\langchain\utilities\redis.py", line 127, in get_client
if _check_for_cluster(redis_client):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\langchain\utilities\redis.py", line 198, in _check_for_cluster
cluster_info = redis_client.info("cluster")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\redis\commands\core.py", line 1004, in info
return self.execute_command("INFO", section, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\redis\client.py", line 505, in execute_command
conn = self.connection or pool.get_connection(command_name, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\redis\connection.py", line 1068, in get_connection
connection = self.make_connection()
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\redis\connection.py", line 1108, in make_connection
return self.connection_class(**self.connection_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\userabc\.virtualenvs\project-c_T0zlg5\Lib\site-packages\redis\connection.py", line 571, in __init__ super().__init__(**kwargs)
TypeError: AbstractConnection.__init__() got an unexpected keyword argument 'keys'
```
| Redis Vectorstore: cannot set custom keys - got an unexpected keyword argument | https://api.github.com/repos/langchain-ai/langchain/issues/10411/comments | 6 | 2023-09-09T20:35:55Z | 2023-09-12T22:29:42Z | https://github.com/langchain-ai/langchain/issues/10411 | 1,888,867,362 | 10,411 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain version 0.0.285
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
CharacterTextSplitter has options for `chunk_size` and `chunk_overlap` but doesn't make use of them in the splitting. Though this is not technically erroneous, it is misleading given that a lot of the documentation shows CharacterTextSplitter with these arguments specified, implying that the class is creating desirably sized chunks when in reality it is not. Here is an [example](https://python.langchain.com/docs/integrations/vectorstores/activeloop_deeplake) of such documentation implying this.
Below is a code sample reproducing the problem. RecursiveCharacterTextSplitter works to reorganize the texts into chunks of the specified `chunk_size`, with chunk overlap where appropriate. Meanwhile, CharacterTextSplitter doesn't do this. You can observe the difference in the overlap behavior by printing out `texts_c` and `texts_rc`.
```
from langchain.schema.document import Document
from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
doc1 = Document(page_content="Just a test document to assess splitting/chunking")
doc2 = Document(page_content="Short doc")
docs = [doc1, doc2]
text_splitter_c = CharacterTextSplitter(chunk_size=30, chunk_overlap=10)
text_splitter_rc = RecursiveCharacterTextSplitter(chunk_size=30, chunk_overlap=10)
texts_c = text_splitter_c.split_documents(docs)
texts_rc = text_splitter_rc.split_documents(docs)
max_chunk_c = max([ len(x.to_json()['kwargs']['page_content']) for x in texts_c])
max_chunk_rc = max([ len(x.to_json()['kwargs']['page_content']) for x in texts_rc])
print(f"Max chunk in CharacterTextSplitter output is of length {max_chunk_c}")
print(f"Max chunk in RecursiveCharacterTextSplitter output is of length {max_chunk_rc}")
```
### Expected behavior
Either remove the arguments from CharacterTextSplitter to avoid ambiguity, use RecursiveCharacterTextSplitter which performs the expected behavior of resizing into appropriately sized chunks, or add to CharacterTextSplitter a split_text function to perform the aforesaid expected behavior | CharacterTextSplitter doesn't break down text into specified chunk sizes | https://api.github.com/repos/langchain-ai/langchain/issues/10410/comments | 8 | 2023-09-09T20:23:27Z | 2024-05-09T07:21:57Z | https://github.com/langchain-ai/langchain/issues/10410 | 1,888,864,581 | 10,410 |
[
"hwchase17",
"langchain"
]
| ### System Info
In the current tool, input from the user is taken via CMD (command line), but how would this work in the case of a web application?
### Who can help?
@hwchase17 @agola11 @eyurtsev
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
You can reproduce this by using the current code from the "human as a tool" example.
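What I'd like instead is a way to supply a custom input function, roughly (sketch, assuming `input_func` is still the right hook):
```
from langchain.agents import load_tools

def web_input_func() -> str:
    # Placeholder: in a web app this would come from the request body or a
    # websocket message rather than from stdin.
    raise NotImplementedError("wire this up to the web UI")

tools = load_tools(["human"], input_func=web_input_func)
```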
### Expected behavior
It should not take human input from CMD, as it cause issue in web based application. | How to use human input as a tool in wen based application | https://api.github.com/repos/langchain-ai/langchain/issues/10406/comments | 4 | 2023-09-09T13:49:07Z | 2024-02-20T23:39:28Z | https://github.com/langchain-ai/langchain/issues/10406 | 1,888,750,607 | 10,406 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am using langchain==0.0.283 and openai==0.28.0.
There seems to be no mention of packaging as a dependency, but when I run my system in a Docker image based on python:3.10.12-slim, packaging is missing.
So please add it explicitly as a dependency, such as packaging==21.3.
Thanks.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install langchain as a dependency, such as langchain==0.0.283
2. run the code in a container environment such as: FROM python:3.10.12-slim
3. you will get runtime errors
### Expected behavior
runtime errors happening:
File "main/routes_bot.py", line 4, in init main.routes_bot
File "/usr/local/lib/python3.10/site-packages/langchain/__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "/usr/local/lib/python3.10/site-packages/langchain/agents/__init__.py", line 31, in <module>
from langchain.agents.agent import (
File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent.py", line 14, in <module>
from langchain.agents.agent_iterator import AgentExecutorIterator
File "/usr/local/lib/python3.10/site-packages/langchain/agents/agent_iterator.py", line 21, in <module>
from langchain.callbacks.manager import (
File "/usr/local/lib/python3.10/site-packages/langchain/callbacks/__init__.py", line 10, in <module>
from langchain.callbacks.aim_callback import AimCallbackHandler
File "/usr/local/lib/python3.10/site-packages/langchain/callbacks/aim_callback.py", line 5, in <module>
from langchain.schema import AgentAction, AgentFinish, LLMResult
File "/usr/local/lib/python3.10/site-packages/langchain/schema/__init__.py", line 28, in <module>
from langchain.schema.output_parser import (
File "/usr/local/lib/python3.10/site-packages/langchain/schema/output_parser.py", line 21, in <module>
from langchain.schema.runnable import Runnable, RunnableConfig
File "/usr/local/lib/python3.10/site-packages/langchain/schema/runnable/__init__.py", line 1, in <module>
from langchain.schema.runnable._locals import GetLocalVar, PutLocalVar
File "/usr/local/lib/python3.10/site-packages/langchain/schema/runnable/_locals.py", line 15, in <module>
from langchain.schema.runnable.base import Input, Output, Runnable
File "/usr/local/lib/python3.10/site-packages/langchain/schema/runnable/base.py", line 58, in <module>
from langchain.utils.aiter import atee, py_anext
File "/usr/local/lib/python3.10/site-packages/langchain/utils/__init__.py", line 17, in <module>
from langchain.utils.utils import (
File "/usr/local/lib/python3.10/site-packages/langchain/utils/utils.py", line 9, in <module>
from packaging.version import parse
ModuleNotFoundError: No module named 'packaging' | installer is not requesting packaging but the code requires it in practice | https://api.github.com/repos/langchain-ai/langchain/issues/10404/comments | 5 | 2023-09-09T13:02:55Z | 2024-05-22T16:07:12Z | https://github.com/langchain-ai/langchain/issues/10404 | 1,888,734,168 | 10,404 |
[
"hwchase17",
"langchain"
]
| ### System Info
I am running Django and chromadb in Docker.
Django port 8001
chromadb port 8002
The snippet below is inside the Django application. On running it, it creates a local directory named chroma containing a chroma.sqlite3 file and a randomly named subdirectory.
It doesn't make any call to chromadb's service `chroma` at port 8002.
```
docs = loader.load()
emb = OpenAIEmbeddings()
chroma_settings = Settings()
chroma_settings.is_persistent = True
chroma_settings.chroma_server_host = "chroma"
chroma_settings.chroma_server_http_port = "8002"
# chroma_settings.persist_directory = "chroma/"
Chroma.from_documents(
client_settings=chroma_settings,
collection_name="chroma_db",
documents=docs,
embedding=emb,
# persist_directory=os.path.join(settings.BASE_DIR, "chroma_db")
)
```
Running
`HttpClient(host="chroma", port="8002").list_collections()` returns `[]`.
running `http://localhost:8002/api/v1/heartbeat` from browser shows `{"nanosecond heartbeat":1694261199976223880}`
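Is the supported way to reach a remote server instead to pass an explicit `chromadb.HttpClient`? Something like (untested sketch):
```
import chromadb
from langchain.vectorstores import Chroma

# docs / emb as in the snippet above
chroma_client = chromadb.HttpClient(host="chroma", port=8002)
db = Chroma.from_documents(
    client=chroma_client,
    collection_name="chroma_db",
    documents=docs,
    embedding=emb,
)
```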
versions info
```
langchain==0.0.285
openai==0.27.8
django-jazzmin==2.6.0
tiktoken==0.4.0
jq==1.4.1
chromadb==0.4.*
lark
```
docker-compose
```
version: "3.4"
x-common: &common
stdin_open: true
tty: true
restart: unless-stopped
networks:
- pharmogene
x-django-build: &django-build
build:
context: .
dockerfile: ./Dockerfile.dev
services:
django:
container_name: pharmogene-dc01
command:
- bash
- -c
- |
python manage.py collectstatic --no-input
python manage.py runserver 0.0.0.0:8000
ports:
- 8000:8000
env_file:
- config/env/dev/.django
volumes:
- ./:/code
- pharmogene_static_volume:/code/static
- pharmogene_media_volume:/code/media
depends_on:
- postgres
- redis
<<: [*common,*django-build]
chroma:
container_name: pharmogene-cdbc-01
# image: ghcr.io/chroma-core/chroma:latest
image: chromadb/chroma:0.4.10.dev2
command: uvicorn chromadb.app:app --reload --workers 1 --host 0.0.0.0 --port 8002 --log-config log_config.yml
volumes:
- ./:/code
# Default configuration for persist_directory in chromadb/config.py
# Currently it's located in "/chroma/chroma/"
environment:
- IS_PERSISTENT=TRUE
- PERSIST_DIRECTORY=${PERSIST_DIRECTORY:-/chroma/chroma}
ports:
- "8002:8002"
depends_on:
- redis
- postgres
- django
- celery
- celery_beat
<<: *common
networks:
pharmogene:
driver: bridge
volumes:
....
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install latest version of chromadb and langchain in separate container
2. run chroma in docker using docker hub image
3. try to create embeddings
### Expected behavior
expected behaviour is Httpclient().list_collections() should return list of collections from chroma running inside other container. | Chrom from_documents not making embedding to remote chromadb server | https://api.github.com/repos/langchain-ai/langchain/issues/10403/comments | 2 | 2023-09-09T12:13:10Z | 2023-09-09T14:26:42Z | https://github.com/langchain-ai/langchain/issues/10403 | 1,888,719,111 | 10,403 |
[
"hwchase17",
"langchain"
]
| ### System Info
langchain 0.0.285 on Windows. A reproducible script is attached.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains.openai_functions import create_openai_fn_chain
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())
database = [
{"name": "Salami", "price": 9.99},
{"name": "Margherita", "price": 8.99},
{"name": "Pepperoni", "price": 10.99},
{"name": "Hawaiian", "price": 11.49},
{"name": "Veggie Supreme", "price": 10.49},
]
def get_pizza_info(pizza_name: str) -> dict:
"""Retrieve information about a specific pizza from the database.
Args:
pizza_name (str): Name of the pizza.
Returns:
dict: A dictionary containing the pizza's name and price or a message indicating the pizza wasn't found.
"""
for pizza in database:
if pizza["name"] == pizza_name:
return pizza
return {"message": f"No pizza found with the name {pizza_name}."}
def add_pizza(pizza_name: str, price: float) -> dict:
"""Add a new pizza to the database.
Args:
pizza_name (str): Name of the new pizza.
price (float): Price of the new pizza.
Returns:
dict: A message indicating the result of the addition.
"""
for pizza in database:
if pizza["name"] == pizza_name:
return {"message": f"Pizza {pizza_name} already exists in the database."}
database.append({"name": pizza_name, "price": price})
return {"message": f"Pizza {pizza_name} added successfully!"}
llm = ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0)
template = """You are an AI chatbot having a conversation with a human.
Human: {human_input}
AI: """
prompt = PromptTemplate(input_variables=["human_input"], template=template)
chain = create_openai_fn_chain(
[get_pizza_info, add_pizza], llm, prompt, verbose=True
)
result1 = chain.run("I want to add the pizza 'Jumbo' for 13.99")
print(result1)
result2 = chain.run("Who are the main characters of the A-Team?") <- that code does not work
print(result2)
```
Traceback:
```
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\output_parsers\openai_functions.py", line 28, in
parse_result
func_call = copy.deepcopy(message.additional_kwargs["function_call"])
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
KeyError: 'function_call'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\User\Desktop\LangChain\07_OpenAI_Functions\pizza_store.py", line 63, in <module>
result1 = chain.run("Who are the main characters of the A-Team?")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 487, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 292, in __call__
raise e
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 286, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\llm.py", line 92, in _call
return self.create_outputs(response)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\llm.py", line 220, in create_outputs
result = [
^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\llm.py", line 223, in <listcomp>
self.output_key: self.output_parser.parse_result(generation),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\output_parsers\openai_functions.py", line 49, in
parse_result
function_call_info = super().parse_result(result)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\output_parsers\openai_functions.py", line 30, in
parse_result
raise OutputParserException(f"Could not parse function call: {exc}")
langchain.schema.output_parser.OutputParserException: Could not parse function call: 'function_call'
```
### Expected behavior
I would expect behaviour similar to using the vanilla API.
```
def chat(query):
response = openai.ChatCompletion.create(
model="gpt-3.5-turbo-0613",
messages=[{"role": "user", "content": query}],
functions=functions, # this is new
)
message = response["choices"][0]["message"]
return message
chat("What is the capital of france?")
```
If I run a query not related to the function, the response may or may not include "function_call" in the output. I can handle this as follows:
```
if message.get("function_call"):
pizza_name = json.loads(message["function_call"]["arguments"]).get("pizza_name")
print(pizza_name)
function_response = get_pizza_info(
pizza_name=pizza_name
)
print(function_response)
```
Is there a workaround, does it work as intended, or is this an unknown bug? I would normally just expect it to work without having to create a workaround :)
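A minimal sketch of a possible interim workaround - falling back to a plain completion whenever the model answers without a function call - could look like this (the helper name is made up; `chain` and `llm` are the objects defined above):
```python
from langchain.schema import OutputParserException

def run_with_fallback(text: str) -> str:
    """Try the function-calling chain first; fall back to the bare LLM."""
    try:
        return chain.run(text)
    except OutputParserException:
        # The model replied with plain text instead of a function_call,
        # so return an ordinary completion instead of raising.
        return llm.predict(text)

print(run_with_fallback("Who are the main characters of the A-Team?"))
```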
| create_openai_fn_chain throws an error when not providing input not related to a function | https://api.github.com/repos/langchain-ai/langchain/issues/10397/comments | 4 | 2023-09-09T09:36:42Z | 2023-12-18T23:47:03Z | https://github.com/langchain-ai/langchain/issues/10397 | 1,888,670,258 | 10,397 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Any possible ways to run a Q&A bot for my fine-tuned Llama2 model in Google Colab?
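Not an authoritative answer, but a minimal sketch of one way this is commonly wired up - wrapping the fine-tuned checkpoint in a `transformers` pipeline and handing it to LangChain (the checkpoint name and prompt are placeholders):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

checkpoint = "my-user/my-finetuned-llama2"  # placeholder for your fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)
llm = HuggingFacePipeline(pipeline=pipe)

prompt = PromptTemplate.from_template("Question: {question}\nAnswer:")
qa_chain = LLMChain(llm=llm, prompt=prompt)
print(qa_chain.run(question="What is LangChain?"))
```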
### Motivation
Any possible ways to run a Q&A bot for my fine-tuned Llama2 model in Google Colab?
### Your contribution
Any possible ways to run a Q&A bot for my fine-tuned Llama2 model in Google Colab? | How to use my fine-tuned Llama2 model in Langchain? | https://api.github.com/repos/langchain-ai/langchain/issues/10395/comments | 9 | 2023-09-09T07:20:40Z | 2023-09-26T02:15:12Z | https://github.com/langchain-ai/langchain/issues/10395 | 1,888,620,991 | 10,395 |
[
"hwchase17",
"langchain"
]
| ### System Info
Langchain version: 0.0.272
Python version: 3.10
Host System: Windows 11
I'm loading Few Shot Prompts from a fewshot_prompts.yaml file by using the load_prompt() function. The fewshot_prompts.yaml file has a section with the title "examples:" that loads the Few Shot Prompt examples from the file example_prompts.yaml. The files fewshot_prompts.yaml and example_prompts.yaml are both in the same directory, but the _load_examples() function is not able to locate/load example_prompts.yaml, and there is no way to specify the path to this file.
Due to the above issue, the loading of the example_prompts.yaml file fails.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The fewshot_prompts.yaml file:
_type: few_shot
input_variables:
["bot_response"]
prefix:
The following are excerpts from conversations with an AI assistant. Given an input text and a set of rules,the assistant strictly follows the rules and provides an "yes" or "no" answer.Here are some examples
example_prompt:
_type: prompt
input_variables:
["bot_response","answer"]
template:
"bot_response: {bot_response}\nanswer: {answer}"
examples:
example_prompts.yaml
**************************************************************************
Unable to find the file example_prompts.yaml
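For reference, a minimal sketch of the load call and of the workaround that usually avoids this error (the file names come from the description above; resolving the `examples:` path against the current working directory is an assumption):
```python
import os
from langchain.prompts import load_prompt

# Assumption: both fewshot_prompts.yaml and example_prompts.yaml sit next to this script.
here = os.path.dirname(os.path.abspath(__file__))

# The relative path in the "examples:" field appears to be resolved against the
# current working directory, so chdir (or an absolute path in the YAML) avoids the error.
os.chdir(here)
prompt = load_prompt(os.path.join(here, "fewshot_prompts.yaml"))
```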
### Expected behavior
1. Provide a way to specify the path to load the example_prompts.yaml file | Issue with loading xxx_prompts.yaml file specified under "examples" section in the .yaml file passed as parameter in load_prompt() | https://api.github.com/repos/langchain-ai/langchain/issues/10390/comments | 8 | 2023-09-09T04:39:51Z | 2023-12-18T23:47:08Z | https://github.com/langchain-ai/langchain/issues/10390 | 1,888,574,094 | 10,390 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
[AIMessagePromptTemplate documentation](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html#langchain-prompts-chat-aimessageprompttemplate) incorrectly and confusingly describes the message as "... This is a message that is not sent to the user."
[HumanMessagePromptTemplate documentation](https://api.python.langchain.com/en/latest/prompts/langchain.prompts.chat.HumanMessagePromptTemplate.html#langchain-prompts-chat-humanmessageprompttemplate) incorrectly and confusingly describes the message as "... This is a message that is sent to the user."
Compare to the documentation for [AIMessage](https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.AIMessage.html#langchain-schema-messages-aimessage) and [HumanMessage](https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.HumanMessage.html#langchain-schema-messages-humanmessage), which correctly and clearly describe each message as "A message from an AI" and "A message from a human." respectively.
### Idea or request for content:
AIMessagePromptTemplate should be described as "AI message prompt template. This is a message that is sent to the user from the AI."
HumanMessagePromptTemplate should be described as "Human message prompt template. This is a message that is sent from the user to the AI."
These are clear, concise and consistent with documentation of the message schema.
I will submit a PR with revised docstrings for each class. This should, then, be reflected in the API reference documentation upon next build. | DOC: Incorrect and confusing documentation of AIMessagePromptTemplate and HumanMessagePromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/10378/comments | 2 | 2023-09-08T16:43:51Z | 2023-12-18T23:47:12Z | https://github.com/langchain-ai/langchain/issues/10378 | 1,888,011,222 | 10,378 |
[
"hwchase17",
"langchain"
]
| ### System Info
Python 3.10.8 running on mac mini in VS Code
### Who can help?
@asai95
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Attempt to import BaseMessageConverter following the custom storage guide here - https://python.langchain.com/docs/integrations/memory/sql_chat_message_history
Enter `from langchain.memory.chat_message_histories.sql import BaseMessageConverter`
Actual behavior - ImportError: cannot import name 'BaseMessageConverter' from 'langchain.memory.chat_message_histories.sql' (/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/memory/chat_message_histories/sql.py)
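For anyone else hitting this: `BaseMessageConverter` only exists in fairly recent releases, so the ImportError most likely means the installed langchain predates it (an assumption, since the installed version isn't shown). A quick check after running `pip install --upgrade langchain`:
```python
import langchain
print(langchain.__version__)

# This import should succeed on a release that includes the custom-converter feature.
from langchain.memory.chat_message_histories.sql import BaseMessageConverter
```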
### Expected behavior
Being able to import the BaseMessageConverter | Couldn't import BaseMessageConverter from | https://api.github.com/repos/langchain-ai/langchain/issues/10377/comments | 5 | 2023-09-08T16:06:52Z | 2023-09-09T00:46:51Z | https://github.com/langchain-ai/langchain/issues/10377 | 1,887,955,487 | 10,377 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
I would like to get clarification on the proper setup of Function Agents as Tools for other Agents.
The [Agent Tools Docs](https://python.langchain.com/docs/modules/agents/tools/) say that **tools can even be other agents, but there is a lack of examples**.
I have implemented the following code with 2 approaches - inheriting from the `BaseTool` interface to convert a Function Agent into a Tool object, and also trying the `Tool.from_function()` approach. My goal is to pre-define custom business formulas to be executed precisely as required, and OpenAI Functions suit my needs well. For this purpose, I must use the Function Agent, and I have followed this [Custom functions with OpenAI Functions Agent guide](https://python.langchain.com/docs/modules/agents/how_to/custom-functions-with-openai-functions-agent). However, there's a need to use other types of tools as well, requiring the initialization of different Agent types to invoke them. I'm seeking a solution where I can have a centralized Agent capable of invoking other agents. Currently, I can only see a Classification Chain model as a solution for this purpose, as the cases provided below don't work.
### Current Behavior
The current structure of the code exhibits the following issues:
- The `function_agent` executes FOO OpenAI Function and other tasks perfectly.
- The `main_agent` throws errors and only occasionally manages to execute the FOO OpenAI function. I am not sure why it is able to run successfully once in a while.
### Desired Behavior
I would like the `functions_agent` to work seamlessly within the `main_agent` so that it performs equally well.
I believe there might be an issue with how the `message` query is being passed to the OpenAI Python functions in the two distinct approaches outlined below. Can you please offer guidance on the correct method to initialize the Function Agent as a tool for another agent, or suggest an alternative approach to achieve the desired behavior? Thank you.
```python
from langchain.agents import initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentType, Tool
from pydantic import BaseModel, Field
from langchain.tools import BaseTool
from typing import Type
from dotenv import load_dotenv, find_dotenv
_ = load_dotenv(find_dotenv())
def foo(target):
"""Useful when you want to get current mean and std
"""
return {"TARGET": target, "number": 123}
class FooFuncInput(BaseModel):
"""Inputs parsed from LLM for 'foo' function"""
target: str = Field(description="Allows for manual interaction using either hands or legs. "
"allowable values: {`hands`, `legs`} ")
class FooFuncTool(BaseTool):
name = "foo"
description = f"""
Calculate {name} business function.
"""
args_schema: Type[BaseModel] = FooFuncInput
return_direct = True
verbose = False
def _run(self, target: str):
response = foo(target)
return response
def _arun(self):
raise NotImplementedError("foo does not support async")
def get_function_tools():
tools = [
FooFuncTool(),
]
return tools
functions_agent = initialize_agent(
tools=get_function_tools(),
llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
agent=AgentType.OPENAI_FUNCTIONS,
verbose=True,
max_iterations=1,
early_stopping_method="generate"
)
class AGENT_FUNCTION_TOOL(BaseTool):
name = "AGENT_FUNCTION_TOOL"
description = """
Useful for when you need to calculate business functions
Names of allowable business functions: `ATT`, `DDT`, `FOO`
"""
return_direct = True
verbose = True
def _run(self, query: str):
response = functions_agent.run(query)
return response
def _arun(self):
raise NotImplementedError("AGENT_FUNCTION_TOOL does not support async")
tools = [
AGENT_FUNCTION_TOOL(),
# Tool.from_function(
# func=functions_agent.run,
# name="Defined Business functions agent",
# description="""
# Useful for when you need to calculate business functions
# Names of allowable business functions: `ATT`, `DDT`, `FOO`
# """),
]
assistant_target = 'business management'
agent_kwargs = \
{'prefix': f'You are friendly {assistant_target} assistant.'
f'questions related to {assistant_target}. You have access to the following tools:'}
main_agent = initialize_agent(
tools=tools,
llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
max_iterations=1,
early_stopping_method="generate",
agent_kwargs=agent_kwargs
)
message = 'calculate business function `FOO` and do it with bare hands'
main_agent.run(message)
```
I have also tried using the `Tool.from_function()` method as explained in the [reddit "can an agent be a tool" thread](https://www.reddit.com/r/LangChain/comments/13618qu/can_an_agent_be_a_tool/), but it looks like the `message` query is not properly passed to the OpenAI Python functions there either.
```python
Tool.from_function(
func=functions_agent.run,
name="Defined Business functions agent",
description="""
Useful for when you need to calculate business functions
Names of allowable business functions: `ATT`, `DDT`, `FOO`
"""),
```
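For comparison, a minimal sketch of the `Tool.from_function()` route where a small wrapper and a more explicit description ask the outer agent to pass the whole request through (the tool name and wording are made up; `functions_agent`, `Tool`, `ChatOpenAI`, `initialize_agent` and `AgentType` are reused from the snippet above):
```python
def run_business_functions(query: str) -> str:
    # Forward the outer agent's Action Input verbatim to the inner
    # OpenAI-functions agent, which then selects the concrete function.
    return functions_agent.run(query)

business_tool = Tool.from_function(
    func=run_business_functions,
    name="business_functions",
    description=(
        "Calculate business functions (`ATT`, `DDT`, `FOO`). "
        "The input must be the complete natural-language request, "
        "including any parameters such as `hands` or `legs`."
    ),
    return_direct=True,
)

main_agent = initialize_agent(
    tools=[business_tool],
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
```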
---------
Executed script using `main_agent` with `AGENT_FUNCTION_TOOL`, class implemented from `BaseTool` langchain interface

---------
Executed script using `main_agent` with `Tool.from_function()`

---------
Executing script using `functions_agent` and desired behavior:

### Idea or request for content:
How to properly use Function Agents as Tools which consequently can be used by other Agents. | DOC: How to properly initialize Function Agent as a Tool for other Agent | https://api.github.com/repos/langchain-ai/langchain/issues/10375/comments | 5 | 2023-09-08T15:33:54Z | 2024-05-14T08:03:21Z | https://github.com/langchain-ai/langchain/issues/10375 | 1,887,908,236 | 10,375 |
[
"hwchase17",
"langchain"
]
| ### System Info
Hi there :wave: ,
I am wondering why Redis is now completely broken (most of the parameter names have been changed) and why this was done in a minor version change - it doesn't make any sense. I'd also like to know how to pass `k` to the retriever now and how to get `k` back; previously I could just do `retriever.k`.
Thanks,
Fra
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just try to use redis now
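For the `k` part of the question, a sketch of the pattern that presumably replaces `retriever.k` on recent versions (assuming `rds` is a `Redis` vector store instance):
```python
# Pass k through search_kwargs when building the retriever...
retriever = rds.as_retriever(search_type="similarity", search_kwargs={"k": 4})

# ...and read or change it later through the same dict instead of retriever.k
print(retriever.search_kwargs["k"])
retriever.search_kwargs["k"] = 10
```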
### Expected behavior
I wouldn't expect BREAKING changes from a minor version change | Why did you broke Redis completely in a minor version change | https://api.github.com/repos/langchain-ai/langchain/issues/10366/comments | 4 | 2023-09-08T13:30:27Z | 2023-09-10T06:59:18Z | https://github.com/langchain-ai/langchain/issues/10366 | 1,887,699,565 | 10,366 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Overall, there are so many terms layered on top of each other. I do understand we are trading off growing fast against having clean, settled terms, but let's discuss what is currently the common way to define an AGENT that uses OpenAI as the reasoning engine and answers questions using tools that we define.
What is the AgentExecutor for, if we can also run agent.run()?
What is AgentExecutorIterator?
We can define an agent like this:
```
agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS,
agent_kwargs=agent_kwargs, verbose=True, return_intermediate_steps=True,)
```
But also we can define an agent like this: [(From this documentation: Custom LLM Agent)](https://python.langchain.com/docs/modules/agents/how_to/custom_llm_agent)
```
llm = OpenAI(temperature=0)
llm_chain = LLMChain(llm=llm, prompt=prompt)
agent = LLMSingleActionAgent(
llm_chain=llm_chain,
output_parser=output_parser,
stop=["\nObservation:"],
allowed_tools=tool_names
)
```
There are also ChatAgent, OpenAIAgent and more [Classes](https://js.langchain.com/docs/api/agents/classes/)
I would like to have a custom agent that I use with the OpenAI API, giving it custom tools like weather functions or finance functions, etc. What is the cleanest way to do this currently? (8 September 2023)
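For reference, one commonly used wiring for exactly this case - an OpenAI-functions agent with custom tools - looks roughly like the sketch below (not necessarily the "cleanest" way; the tool body is a placeholder):
```python
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.chat_models import ChatOpenAI

def get_weather(city: str) -> str:
    """Placeholder weather lookup."""
    return f"It is sunny in {city}."

tools = [
    Tool.from_function(
        func=get_weather,
        name="get_weather",
        description="Get the current weather for a given city.",
    ),
]

agent = initialize_agent(
    tools,
    ChatOpenAI(model="gpt-3.5-turbo-0613", temperature=0),
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)
agent.run("What is the weather in Paris?")
```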
### Idea or request for content:
_No response_ | DOC: What is the Difference between OpenAIAgent and agent=AgentType.OPENAI_FUNCTIONS | https://api.github.com/repos/langchain-ai/langchain/issues/10361/comments | 2 | 2023-09-08T10:16:55Z | 2023-12-18T23:47:22Z | https://github.com/langchain-ai/langchain/issues/10361 | 1,887,374,918 | 10,361 |
[
"hwchase17",
"langchain"
]
| ### Feature request
The project [Basaran](https://github.com/hyperonym/basaran) lets you host an API that is similar to the OpenAI API, but with a self-hosted LLM. Support for such custom API LLMs would be great.
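Since Basaran exposes an OpenAI-compatible endpoint, it may already work by pointing the existing OpenAI wrapper at it (a sketch; the URL and model name are placeholders):
```python
from langchain.llms import OpenAI

llm = OpenAI(
    openai_api_base="http://localhost:80/v1",  # Basaran's OpenAI-compatible endpoint
    openai_api_key="not-needed",               # any non-empty string works
    model_name="user/llama-7b",                # whatever model Basaran is serving
)
print(llm("Hello, world"))
```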
### Motivation
Further democratizing LLMs
### Your contribution
I could test it | Own Api LLM | https://api.github.com/repos/langchain-ai/langchain/issues/10359/comments | 2 | 2023-09-08T10:00:51Z | 2023-12-18T23:47:27Z | https://github.com/langchain-ai/langchain/issues/10359 | 1,887,349,628 | 10,359 |
[
"hwchase17",
"langchain"
]
| ### mapreduce chain doesn't provide full response.
I'm currently using the map-reduce chain to do news summarization, but the output doesn't give the full response. Is there any way to fix this issue? I'm following the map-reduce chain example from the LangChain API guide.
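A hedged sketch of the usual first thing to check - the completion limit on the LLM itself, since truncated summaries are often just the model hitting `max_tokens` (parameter values are illustrative; `docs` stands for the already-split articles):
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.summarize import load_summarize_chain

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0, max_tokens=1024)
chain = load_summarize_chain(llm, chain_type="map_reduce")
summary = chain.run(docs)  # docs = the split news articles
print(summary)
```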
| Issue: The mapreduce chain doesn't generate full response. | https://api.github.com/repos/langchain-ai/langchain/issues/10357/comments | 2 | 2023-09-08T09:26:17Z | 2023-12-18T23:47:33Z | https://github.com/langchain-ai/langchain/issues/10357 | 1,887,295,643 | 10,357 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Please update dependencies to support installing `pydantic>=2.0`. Now there's a conflict; for example, the pip resolver gives the following error:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
langchainplus-sdk 0.0.20 requires pydantic<2,>=1, but you have pydantic 2.3.0 which is incompatible.
langchain 0.0.228 requires pydantic<2,>=1, but you have pydantic 2.3.0 which is incompatible.
```
### Motivation
pydantic 1.* is outdated. Starting a new project and using old syntax that will be deprecated soon is unpleasant.
### Your contribution
- | pydantic 2.0 support | https://api.github.com/repos/langchain-ai/langchain/issues/10355/comments | 1 | 2023-09-08T08:33:35Z | 2023-09-08T08:49:07Z | https://github.com/langchain-ai/langchain/issues/10355 | 1,887,216,037 | 10,355 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I am using ConversationalRetrievalChain for a document Q/A bot and updating chat_history = [] with every message; however, I noticed this chat_history string is never added to the final inference string. In the _call method of the BaseConversationalRetrievalChain class, even when the "if chat_history_str:" condition is true, new_question is never updated with chat_history_str.
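Note that the history is not concatenated into new_question verbatim; it is condensed into a standalone question by the question generator first, which appears to be the intended behavior. A minimal usage sketch (assuming an existing `retriever`):
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0), retriever=retriever)

chat_history = []
first = qa({"question": "What is covered in chapter 2?", "chat_history": chat_history})
chat_history.append(("What is covered in chapter 2?", first["answer"]))

# The follow-up "sees" the history only through the condensed standalone question.
follow_up = qa({"question": "And what about the next chapter?", "chat_history": chat_history})
```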
### Suggestion:
_No response_ | ConversationalRetrievalChain is not adding chat_history to new message | https://api.github.com/repos/langchain-ai/langchain/issues/10353/comments | 1 | 2023-09-08T07:36:09Z | 2023-12-18T23:47:37Z | https://github.com/langchain-ai/langchain/issues/10353 | 1,887,127,834 | 10,353 |
[
"hwchase17",
"langchain"
]
| ### Feature request
Like https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/confluence.py, we need a loader for Quip documents; please follow https://github.com/quip/quip-api/tree/master/python.
### Motivation
I have a lot of documents in Quip and want to load these Quip documents.
### Your contribution
I would like to contribute this if no one else takes it. | quip doc loader | https://api.github.com/repos/langchain-ai/langchain/issues/10352/comments | 5 | 2023-09-08T07:07:49Z | 2023-12-18T23:47:43Z | https://github.com/langchain-ai/langchain/issues/10352 | 1,887,085,921 | 10,352 |
[
"hwchase17",
"langchain"
]
| ### Issue you'd like to raise.
I was led to use GPT4All for building a personal chatbot, but it generated undesired output. More precisely, it generates the whole conversation after writing the first prompt. To solve this problem, I instantiated GPT4All with a non-empty list value for the `stop` attribute:
```python
llm = GPT4All(
    model=local_path,
    callbacks=callbacks,
    verbose=True,
    streaming=True,
    stop=["System:"],
)
```
However, it continues to generate this undesired output.
### Suggestion:
Finally I may have found a small fix to solve this problem for this type of LLM. I intent to create a PR very soon, if you agree with this. | Issue: GPT4All LLM continue to generate undesired tokens even if stop attribute has been specified | https://api.github.com/repos/langchain-ai/langchain/issues/10345/comments | 6 | 2023-09-07T21:33:08Z | 2024-02-21T16:08:35Z | https://github.com/langchain-ai/langchain/issues/10345 | 1,886,588,480 | 10,345 |
[
"hwchase17",
"langchain"
]
| ### System Info
With Streaming turned on, verbose mode is turned on.
Am I doing something wrong?
Python 3.10.9
Name: langchain
Version: 0.0.284
llm2 = ChatOpenAI(
temperature=0,
streaming=True,
callbacks=[StreamingStdOutCallbackHandler()],
model="gpt-4-0613",
)
agent = initialize_agent(
tools,
llm2,
agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
memory=memory,
verbose=False,
)
Thought: Do I need to use a tool? Yes
Action: Search for Property Owners
Action Input: 5954A Bartonsville RoadDo I need to use a tool? No
AI: The house at 5954A Bartonsville Road is owned by Rainbown Johonson.The house at 5954A Bartonsville Road is owned by Rainbown Johonson.
Thought: Do I need to use a tool? Yes
Action: Search for Property Owners
Action Input: 1163 Annamarie WayDo I need to use a tool? Yes
Action: Search for People
Action Input: Randolph HillDo I need to use a tool? No
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
llm2 = ChatOpenAI(
temperature=0,
streaming=True,
callbacks=[StreamingStdOutCallbackHandler()],
model="gpt-4-0613",
)
agent = initialize_agent(
tools,
llm2,
agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
memory=memory,
#verbose=False,
)
print(
agent.run(
input="Who owns the house at the following address 5954A Bartonsville Road?"
)
)
### Expected behavior
I do not expect verbose output.
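For context, the extra text is most likely not `verbose` output but the `StreamingStdOutCallbackHandler` printing every token the agent's LLM emits, including its ReAct scratchpad. A sketch of a possible mitigation - streaming only the final answer (the prefix tokens may need tuning for the conversational agent, which prefixes its answer with `AI:`):
```python
from langchain.callbacks.streaming_stdout_final_only import (
    FinalStreamingStdOutCallbackHandler,
)
from langchain.chat_models import ChatOpenAI

llm2 = ChatOpenAI(
    temperature=0,
    streaming=True,
    callbacks=[FinalStreamingStdOutCallbackHandler(answer_prefix_tokens=["AI", ":"])],
    model="gpt-4-0613",
)
```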
| Streaming Turns on Verbose mode for AgentType.CONVERSATIONAL_REACT_DESCRIPTION | https://api.github.com/repos/langchain-ai/langchain/issues/10339/comments | 3 | 2023-09-07T17:56:38Z | 2023-12-14T16:04:47Z | https://github.com/langchain-ai/langchain/issues/10339 | 1,886,350,516 | 10,339 |
[
"hwchase17",
"langchain"
]
| ### Issue with current documentation:
Hello.
I think there is little to no explanation as to which are the differences between `LLMSingleActionAgent` and `Agent` classes, and which is suitable for what scenario. Both classes inherit from `BaseSingleActionAgent`.
Thanks in advance.
### Idea or request for content:
_No response_ | DOC: LLMSingleActionAgent vs. Agent | https://api.github.com/repos/langchain-ai/langchain/issues/10338/comments | 2 | 2023-09-07T17:33:37Z | 2023-12-14T16:04:52Z | https://github.com/langchain-ai/langchain/issues/10338 | 1,886,322,923 | 10,338 |
[
"hwchase17",
"langchain"
]
| ### Feature request
I'd like a way to silence TextGen's output in my terminal. For example, a parameter would be perfect.
Looking at TextGen's source code, I see this line `print(prompt + result)`.
If I remove this line, then I get the desired effect where nothing is printed to my terminal.
### Motivation
I'm working on an app with lots of logging and thousands of requests are sent via TextGen.
My output is very noisy with every prompt and result printed, and it's troublesome to scroll through it all. I only care about the result I receive.
### Your contribution
My suggestion is to add a parameter flag to TextGen such that I can control the print (on or off), instead of it being hardcoded in.
There are 2 areas where I'd like to see it changed:
1. When streaming is enabled `print(prompt + combined_text_output)`
2. When streaming is disabled `print(prompt + result)`
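A minimal sketch of what such a flag could look like (the field name `print_output` is just a placeholder, shown on a toy class rather than the real TextGen wrapper):
```python
from pydantic import BaseModel

class TextGenLike(BaseModel):
    """Toy stand-in for the TextGen wrapper, only to illustrate the proposed flag."""

    print_output: bool = True  # proposed parameter; default keeps today's behaviour

    def generate(self, prompt: str) -> str:
        result = "...model output..."
        if self.print_output:  # guard the currently hardcoded print behind the flag
            print(prompt + result)
        return result
```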
| TextGen parameter to silence the print in terminal | https://api.github.com/repos/langchain-ai/langchain/issues/10337/comments | 2 | 2023-09-07T17:16:09Z | 2023-12-07T16:14:14Z | https://github.com/langchain-ai/langchain/issues/10337 | 1,886,302,306 | 10,337 |